1997, 4 (2), 145-166 | RICHARD M. SHIFFRIN and MARK STEYVERS
A new model of recognition memory, REM (Retrieving Effectively from Memory), is introduced to explain phenomena in explicit and implicit memory and in episodic and semantic memory. The model assumes that each word is stored as an incomplete and error-prone vector of feature values. From these vectors it calculates the probability that a test item is "old" and responds "old" when this probability exceeds 0.5 (equivalently, when the posterior odds exceed 1). The model successfully predicts several phenomena that have been challenging for existing models, including the list-strength effect, the mirror effect, and the normal-ROC slope effect.
The REM model is based on the idea that memory consists of separate images, each represented as a vector of feature values. Storage is probabilistic: features are copied into an image incompletely and with some chance of error. At retrieval, the test item's vector is compared with each stored image to compute a likelihood ratio, and Bayesian decision theory combines these ratios into the posterior odds that the item is "old" rather than "new."
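As a concrete illustration, the retrieval calculation can be sketched as follows. This is a minimal sketch under assumed settings, not the authors' simulation code: the geometric distribution of feature values (parameter G), the copy-correct probability C, the specific parameter values, and the toy vectors are all assumptions for the example, with 0 marking a feature that was not stored.

```python
import numpy as np

# Minimal sketch of REM's retrieval calculation (assumed parameter values;
# not the authors' simulation code).
G = 0.4   # geometric base rates: P(feature value = v) = G * (1 - G)**(v - 1)
C = 0.7   # probability that a stored feature was copied correctly

def likelihood_ratio(probe, image):
    """Likelihood ratio that `image` is a noisy, incomplete copy of `probe`.

    Entries of `image` equal to 0 mean "feature not stored" and are ignored.
    Matches on rare feature values are more diagnostic than matches on
    common ones; mismatches on stored features count as evidence against.
    """
    stored = image != 0
    n_mismatch = int((stored & (image != probe)).sum())
    lam = (1.0 - C) ** n_mismatch
    for v in image[stored & (image == probe)]:
        g_v = G * (1.0 - G) ** (v - 1)        # base rate of the matched value
        lam *= (C + (1.0 - C) * g_v) / g_v    # large when the value is rare
    return lam

def odds_old(probe, images):
    """Posterior odds that the probe is "old": the mean likelihood ratio
    over the stored images. Respond "old" when the odds exceed 1, i.e.
    when P(old) exceeds 0.5 under equal priors."""
    return float(np.mean([likelihood_ratio(probe, img) for img in images]))

images = [np.array([2, 0, 3, 1]),     # degraded trace of a studied word
          np.array([1, 1, 0, 2])]     # trace of another studied word
studied = np.array([2, 1, 3, 1])
unstudied = np.array([4, 2, 1, 3])
```

With these toy vectors the studied probe yields odds well above 1 (its features match a stored image), while the unstudied probe mismatches every stored feature and falls below 1, so the decision rule classifies both correctly.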
The model is applied to basic recognition phenomena, including the list-strength effect, the mirror effect, and the normal-ROC slope effect. Even in simplified form, the model predicts these phenomena accurately: its assumptions permit precise derivations, and its structure suffices to capture the basic phenomena of recognition memory.
The model is then extended to more complex and realistic versions, which are applied to the same set of recognition phenomena and compared with existing models that have struggled to explain them. REM provides a principled reason for adopting a particular functional form and predicts qualitative results that have been difficult to handle within existing models.
The model is also applied to natural language word frequency effects, where high-frequency words are recognized less well than low-frequency words. The model accounts for this by assuming that high-frequency words have more common feature values, which are less diagnostic and thus contribute less evidence in favor of an "old" response. The model's predictions for these effects are shown to be accurate.
Overall, the REM model provides a comprehensive framework for understanding recognition memory, successfully predicting several key phenomena that have been challenging for existing models. The model's simplicity and effectiveness make it a valuable tool for further research in memory and cognition.