10:10am - 10:40am   Semantic memory as a computationally free side-effect of sparse distributed generative episodic memory
Gerard John Rinkus
Neurithmic Systems, United States of America
By generative model, we mean a model with sufficient parameters to represent the deep statistical structure of an input domain (not just pairwise statistics but, ideally, statistics of all orders present), in contrast to a discriminative model, whose goal is to learn only enough information to classify inputs. These higher-order statistics include not just class information but, more generally, the full similarity structure over the inputs, and constitute the basis for what we call semantic memory (SM). A generative model can be run “in reverse” to produce plausible (likely), and in general novel, exemplars. By episodic memory (EM), we mean (typically rich, detailed) memories of specific experiences, which, by definition, are formed in single trials in the flow of the experience, apparently with no concurrent goal of learning the class of the experience, or even its similarity relations to other experiences. In a classical storage model of EM, where all inputs (experiences) are stored in full detail, all statistics of the input set are (at least implicitly) retained. This allows retrieval of the precise inputs, but also, in principle, computations over the stored EM traces that produce any higher-order statistic of the input set, i.e., any output viewable as the operation of SM.
A key question is: how are the EM traces stored? If they are stored in localist fashion, i.e., with the traces of individual inputs kept disjoint, then any higher-order statistic must be computed either at retrieval time or at some point after storage and before retrieval. This “pre-computational” view is essentially consistent with the still-preponderant batch-learning paradigm of machine learning. In either case, explicit computational work must be done to produce SM from EM, i.e., work beyond that of storing the EM traces themselves. However, suppose instead that EM traces are stored in distributed fashion, and more specifically as sparse distributed representations (SDRs), i.e., each individual input is stored as a relatively small subset of coactive neurons [a kind of cell assembly (CA)] chosen from a much larger field of neurons. And suppose further that there exists an on-line, single-trial learning mechanism (algorithm) able to cause more similar inputs to be assigned to more highly intersecting SDRs [my prior work (1996, 2010, 2014) describes one, which, moreover, is not optimization-based]. In this case, the (in principle, full) similarity structure over all the inputs is embedded in the intersection structure of the CAs in the act of storing each EM trace. In other words, no work beyond that of simply storing the EM traces themselves is needed to produce the physical representations of the statistics that constitute SM. Thus, SM is physically superposed with EM and, crucially vis-à-vis explaining the efficiency of biological learning and cognition, SM is produced as a computationally free side-effect of the operation of EM. Depending on the specificity of subsequent cues, such a model can output verbatim memories (i.e., episodic recall) but also allows for the kinds of semantic (similarity-based) substitutions (e.g., confabulations) that generative episodic memory (GEM) aims to explain.
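To make the intersection-coding idea concrete, the following minimal Python sketch uses a toy encoder (a fixed random projection with top-K selection); it is not Rinkus's learning algorithm, and all sizes are illustrative assumptions. The point it demonstrates is the one made above: storing each episode as a sparse code is the only work performed, yet pairwise similarity can afterwards be read directly off code intersections.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN = 100       # input dimensionality (illustrative)
N_CODE = 2000    # size of the sparse coding field (illustrative)
K = 40           # active units per stored trace (cell assembly size)

# Fixed random projection standing in for learned afferent weights.
W = rng.standard_normal((N_CODE, N_IN))

def store(x):
    """Encode input x as the set of its K most-activated coding units (an SDR).
    Storing the trace is the only work done; no statistics are computed."""
    activation = W @ x
    return set(np.argsort(activation)[-K:])

def overlap(code_a, code_b):
    """Fraction of active units shared by two stored traces."""
    return len(code_a & code_b) / K

# Three episodes: A and A' are similar inputs, B is unrelated.
a = rng.standard_normal(N_IN)
a2 = a + 0.2 * rng.standard_normal(N_IN)   # a noisy variant of a
b = rng.standard_normal(N_IN)

traces = {"A": store(a), "A'": store(a2), "B": store(b)}

# The similarity structure is already embedded in code intersections.
print("overlap(A, A') =", overlap(traces["A"], traces["A'"]))  # large
print("overlap(A, B ) =", overlap(traces["A"], traces["B"]))   # small (~K/N_CODE)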
10:40am - 11:10am   A Generative Model of Memory Construction and Consolidation
Eleanor Spens, Neil Burgess
Institute of Cognitive Neuroscience, University College London, United Kingdom
Human episodic memories are (re)constructed, combining unique features with schema-based predictions, and share neural substrates with imagination. They also show systematic schema-based distortions that increase with consolidation. We present a computational model in which hippocampal replay (from an autoassociative network) trains generative models (variational autoencoders) in neocortex to (re)create sensory experiences via latent variable representations. These generative models learn to capture the probability distributions underlying experiences, or ‘schemas’; this enables not just efficient recall, in which the model reconstructs memories without the need to store them individually, but also imagination (by sampling from the latent variable distributions), inference (by using the learned statistics of experience to predict the values of unseen variables), and semantic memory. Simulations using large image datasets reflect the effects of memory age and hippocampal lesions, and the slow learning of statistical structure, in agreement with previous models of consolidation (Complementary Learning Systems and Multiple Trace Theory), but also build on generative models of perception and memory (e.g. Fayyaz et al., 2022) to explain imagination, inference, schema-based distortions, and continual representation learning in memory. Critically, the model suggests how unique and predictable elements of memories are stored and reconstructed by efficiently combining both hippocampal and neocortical systems, optimising the use of limited hippocampal storage. Finally, the model can be extended to sequential stimuli, including language, and multiple neocortical networks could be trained, including those with latent variable representations in entorhinal, medial prefrontal, and anterolateral temporal cortices. Overall, we believe hippocampal replay training neocortical generative models provides a comprehensive account of memory construction and consolidation.
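As a rough illustration of the architecture described above (an autoassociative hippocampal store replaying traces that train a neocortical variational autoencoder), here is a PyTorch sketch with toy components; the layer sizes, the verbatim-storage replay buffer, and the training loop are illustrative assumptions, not the authors' implementation. Traces are assumed to be flattened images scaled to [0, 1].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Hippocampus:
    """Toy autoassociative store: memorises traces verbatim and replays them."""
    def __init__(self):
        self.traces = []

    def encode(self, x):               # one-shot storage of an episode (1-D tensor in [0, 1])
        self.traces.append(x.detach())

    def replay(self, n):               # sample stored traces for offline training
        idx = torch.randint(len(self.traces), (n,))
        return torch.stack([self.traces[i] for i in idx])

class NeocorticalVAE(nn.Module):
    """Neocortical generative model trained on replayed traces (sizes illustrative)."""
    def __init__(self, x_dim=784, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), mu, logvar

def consolidate(hpc, vae, steps=1000, batch=32):
    """'Consolidation': replay from the hippocampal store trains the generative model."""
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    for _ in range(steps):
        x = hpc.replay(batch)
        recon, mu, logvar = vae(x)
        recon_loss = F.binary_cross_entropy(recon, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = recon_loss + kl
        opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch, "imagination" corresponds to decoding latent samples, e.g. vae.dec(torch.randn(1, 20)), and "semantic" structure lives in the learned latent space rather than in any individually stored trace.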
11:10am - 11:40am   Do activations and representations differ during successful retrieval from episodic vs. semantic memory?
Roni Tibon1,2, Andrea Greve2, Alex Quent2, Gina Humphreys2,3, Rik Henson2,4
1University of Nottingham, United Kingdom; 2MRC Cognition and Brain Sciences Unit, University of Cambridge; 3Institute of Science and Technology for Brain-Inspired Intelligence (ISTBI), Fudan University; 4Department of Psychiatry, University of Cambridge
The distinction between episodic and semantic memory is supported by a large corpus of neuropsychological studies. However, neuroimaging data show considerable overlap between brain regions that are involved in semantic and episodic processing. While this overlap might indicate similar processing, it might also result from confounded task designs. In this registered (accepted Stage 1) fMRI study, we aimed to distinguish retrieval of semantic and episodic memories using closely matched tasks, in which episodic and semantic processes are minimally confounded.
In our task, pictures of logos were paired with their corresponding brand’s name and with an unrelated brand. Participants completed two paired-associate cued-recall tasks: in the episodic task, they studied unrelated logo-name pairs, and then viewed logos and recalled the associated studied brand. In the semantic task, a similar recall procedure ensued, but participants retrieved the associated brand from their prior knowledge. The cued-recall phase was then followed by an evaluation phase, in which the brand name was presented and information about the episodically/semantically associated item was retrieved.
To estimate differential processing of episodic and semantic memories, we contrasted activation for recall-success vs. recall-failure trials (the “recall success” effect), and predicted that some areas would show a greater recall success effect in the semantic vs. the episodic task (e.g., anterior temporal lobe), while others would show the opposite pattern (e.g., hippocampus). To estimate differential content effects (representation), we examined item-specific pattern similarity between recall and evaluation, contrasting the similarity for trials referring to the same episodic/semantic instance with that for trials referring to different instances.
Contrary to our prediction, our analyses showed robust evidence against differential processing or representation (conclusive support for the null; BF01>10) in anatomically defined regions of interest. Taken together, our study shows that semantic and episodic memories are processed and represented in a highly similar manner.
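For readers unfamiliar with item-specific pattern-similarity contrasts, the following Python sketch illustrates the same-item minus different-item logic described above; the simulated data, array shapes, and function names are hypothetical and do not reproduce the registered analysis pipeline.

```python
import numpy as np

def pattern_similarity_contrast(recall_patterns, eval_patterns, item_ids):
    """Item-specific pattern similarity: correlate each recall-phase voxel
    pattern with each evaluation-phase pattern, then contrast same-item
    pairs against different-item pairs. Inputs are (trials x voxels) arrays."""
    n = len(item_ids)
    same, different = [], []
    for i in range(n):
        for j in range(n):
            r = np.corrcoef(recall_patterns[i], eval_patterns[j])[0, 1]
            (same if item_ids[i] == item_ids[j] else different).append(r)
    return np.mean(same) - np.mean(different)

# Illustrative use with simulated data for one region of interest:
rng = np.random.default_rng(1)
ids = np.arange(40)                                      # 40 items, one trial per phase
recall = rng.standard_normal((40, 500))                  # trials x voxels
evaluation = recall + rng.standard_normal((40, 500))     # shared item-specific signal
print(pattern_similarity_contrast(recall, evaluation, ids))  # > 0 when item info is shared
```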