Psychonomic Bulletin & Review, 1997, 4 (2), 145-166 | RICHARD M. SHIFFRIN and MARK STEYVERS
The article introduces a new model of recognition memory, called REM (Retrieving Effectively from Memory), which is part of a broader theory aimed at predicting phenomena of explicit and implicit, episodic and generic memory. The model assumes that each word is stored as a separate episodic image: a vector of feature values in which each positive integer encodes one of the word's features, and a zero marks a feature that was not stored. During storage, each feature has some probability of being stored; if stored, it is copied correctly with probability \( c \), and otherwise its value is drawn at random from a geometric distribution with parameter \( g \).
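The storage assumptions above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation; the parameter values (a per-unit storage probability `u_star` over `t` time units, `c = 0.7`, `g = 0.4`) are assumptions chosen to be roughly in the range the REM literature uses:

```python
import random

def geometric_draw(g, rng):
    """Draw k >= 1 with P(k) = g * (1 - g)**(k - 1)."""
    k = 1
    while rng.random() >= g:
        k += 1
    return k

def encode_word(word_vector, u_star=0.04, t=10, c=0.7, g=0.4, rng=random):
    """Store an episodic image of `word_vector` under REM-style assumptions.

    Each feature ends up stored with probability 1 - (1 - u_star)**t;
    a stored feature is copied correctly with probability c, and is
    otherwise replaced by a random geometric draw. A zero marks a
    feature that was not stored.
    """
    p_store = 1 - (1 - u_star) ** t
    image = []
    for v in word_vector:
        if rng.random() < p_store:
            image.append(v if rng.random() < c else geometric_draw(g, rng))
        else:
            image.append(0)
    return image
```

With `u_star = 1.0` and `c = 1.0` the image is an exact copy of the word, a convenient sanity check; with the default values most features are either absent or noisy, which is what makes episodic images error-prone and drives the model's predictions.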
The model predicts several key phenomena of recognition memory, including the list-strength effect, the mirror effect, and the normal-ROC-slope (NRS) effect. The list-strength finding is that strengthening some list items does not harm, and may even slightly help, recognition of the remaining items. The mirror effect is the observation that variables that lower performance move hit and false alarm rates in opposite directions; for example, longer lists produce both a drop in the hit rate and a rise in the false alarm rate. The NRS effect is the observation that the ratio of the spread of the distractor distribution to the spread of the target distribution is less than one and changes little with list length, strength, or word frequency.
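The list-length version of the mirror effect can be reproduced in a small simulation. The sketch below pairs REM-style storage with the model's likelihood-ratio decision rule (respond "old" when the average likelihood ratio across the stored images exceeds 1); that rule belongs to the full model rather than this summary, and all parameter values here are illustrative assumptions:

```python
import random

W = 20                  # features per word (assumed)
G = 0.40                # geometric parameter for feature values
C = 0.70                # probability a stored feature is copied correctly
U_STAR, T = 0.04, 10    # per-unit storage probability and storage time

def geometric_draw(rng):
    k = 1
    while rng.random() >= G:
        k += 1
    return k

def make_word(rng):
    return [geometric_draw(rng) for _ in range(W)]

def encode(word, rng):
    p_store = 1 - (1 - U_STAR) ** T
    return [(v if rng.random() < C else geometric_draw(rng))
            if rng.random() < p_store else 0
            for v in word]

def mean_odds(probe, images):
    """Average likelihood ratio of 'probe was studied' over all images."""
    total = 0.0
    for img in images:
        lam = 1.0
        for q, v in zip(probe, img):
            if v == 0:
                continue                    # unstored features are ignored
            prior = G * (1 - G) ** (v - 1)  # base rate of stored value v
            lam *= (C + (1 - C) * prior) / prior if q == v else (1 - C)
        total += lam
    return total / len(images)

def hit_and_fa_rates(list_len, n_trials, rng):
    hits = fas = 0
    for _ in range(n_trials):
        words = [make_word(rng) for _ in range(list_len)]
        images = [encode(w, rng) for w in words]
        hits += mean_odds(words[0], images) > 1        # studied probe
        fas  += mean_odds(make_word(rng), images) > 1  # new probe
    return hits / n_trials, fas / n_trials
```

In runs of this sketch, lengthening the list should lower the hit rate and raise the false alarm rate together, which is the mirror pattern described above.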
The authors demonstrate that REM, both in its basic form and in more complex versions, produces excellent qualitative predictions for these phenomena. The model is applied to a range of paradigms, such as cued recall, free recall, and associative recognition, and is shown to predict outcomes that existing models have had difficulty explaining. The article also discusses the limitations of the simplified version of REM and suggests extensions for more complex scenarios.