The study by Zhang and Luck investigates the nature of working memory, specifically whether it stores a limited set of discrete, fixed-resolution representations or a flexible pool of resources. They use a short-term recall paradigm to measure both the number of items stored and the precision of each representation. The results show that when presented with more than a few simple objects, human observers store a high-resolution representation of a subset of the objects and retain no information about the others. Memory resolution varies over a narrow range, which can be explained by a small set of discrete, fixed-resolution representations rather than a general resource pool.
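The decomposition of recall errors into storage probability and precision can be illustrated with a small simulation. The sketch below assumes the standard two-component mixture used for this kind of paradigm: with probability `p_mem` the probed item is in memory and the report error follows a circular normal (von Mises) distribution around the true value, otherwise the observer guesses uniformly on the response wheel. The parameter values and the crude cutoff-based estimator are illustrative choices, not values from the study.

```python
import numpy as np

def simulate_recall_errors(n_trials, p_mem, sd, seed=None):
    """Simulate report errors under a two-component mixture model:
    with probability p_mem the probed item is in memory and the error
    is drawn from a von Mises (circular normal) around the true value;
    otherwise the observer guesses uniformly on the wheel [-pi, pi)."""
    rng = np.random.default_rng(seed)
    in_memory = rng.random(n_trials) < p_mem
    kappa = 1.0 / sd**2  # von Mises concentration ~ 1/sd^2 for small sd
    precise = rng.vonmises(0.0, kappa, n_trials)
    guesses = rng.uniform(-np.pi, np.pi, n_trials)
    return np.where(in_memory, precise, guesses)

errors = simulate_recall_errors(100_000, p_mem=0.6, sd=0.35, seed=0)

# Crude estimate of p_mem: errors beyond a cutoff t come almost entirely
# from the uniform guessing component, which places mass (pi - t) / pi there.
t = 2.0
guess_rate = np.mean(np.abs(errors) > t) / ((np.pi - t) / np.pi)
p_mem_hat = 1.0 - guess_rate
```

Fitting this mixture to observed error distributions is what lets the number of stored items (via Pm) and the resolution of each stored item (via the s.d. of the central component) be measured separately rather than conflated in a single accuracy score.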
To further test these models, the authors conducted experiments that separately measured the probability of an item being in memory (Pm) and the precision of its representation (s.d.). The data supported a "slots + averaging" model, in which observers maximize performance for a cued item by devoting multiple slots to it, yielding a slight improvement in precision on valid trials compared to neutral trials. Precision did not improve for uncued items, however, indicating that storage capacity is not flexibly redistributed among them like a continuous resource.
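The slots + averaging idea can be sketched quantitatively. In the idealized version below, a fixed number of slots is spread over the items; an item held in several slots is reported as the average of its independent noisy copies, so its s.d. shrinks by the square root of the number of slots devoted to it. The capacity of 3 slots and the single-slot s.d. of 0.4 radians are illustrative assumptions, and treating slots per item as a fractional average is a simplification.

```python
import math

K_SLOTS = 3        # illustrative capacity (a small, fixed number of slots)
SD_ONE_SLOT = 0.4  # assumed s.d. (radians) of one slot's representation

def slots_plus_averaging(set_size, k_slots=K_SLOTS, sd1=SD_ONE_SLOT):
    """Predicted (p_mem, sd) under an idealised slots + averaging model:
    below capacity, slots are spread over the items and an item's
    independent copies are averaged, so sd shrinks by 1/sqrt(slots per
    item); above capacity, only k_slots items are stored, each at the
    fixed single-slot resolution, and the rest are not stored at all."""
    if set_size <= k_slots:
        slots_per_item = k_slots / set_size  # idealised fractional average
        return 1.0, sd1 / math.sqrt(slots_per_item)
    return k_slots / set_size, sd1
```

Under these assumptions, a lone item gets all three slots (Pm = 1, s.d. = 0.4/sqrt(3) ≈ 0.23), while at set size 6 only half the items are stored, each at the fixed single-slot resolution (Pm = 0.5, s.d. = 0.4). This mirrors the cueing result: concentrating slots on the cued item improves its precision slightly, while uncued items gain nothing.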
The authors also used a masking manipulation to probe the encoding process, finding that the creation of durable memory representations is all-or-none: an item is either encoded at fixed resolution or not encoded at all. Finally, they generalized their findings from colors to shapes, obtaining similar results and showing that the pattern is robust across stimulus dimensions.
Overall, the study provides a quantitative account of memory performance, showing that a model with a small set of discrete, fixed-resolution representations can explain memory capacity limits across various experimental manipulations.