Why Some Memories Stick



Summary: Researchers developed a computational model revealing why certain experiences become memorable while others are forgotten. Their study suggests that our brains are more likely to remember experiences that are difficult to explain or interpret.

Using a combination of this model and behavioral experiments, they found that images that were harder for the model to reconstruct were more frequently recalled by participants. This insight could advance our understanding of human memory and aid in designing AI systems with more efficient memory processes.

Key Facts:

  1. The study shows that unpredictable or hard-to-explain experiences are more likely to be remembered.
  2. A computational model used in the study linked the difficulty of image reconstruction with higher memorability.
  3. The research could influence future AI development, aiming to emulate human-like memory systems.

Source: Yale

 The human brain filters through a flood of experiences to create specific memories. Why do some of the experiences in this deluge of sensory information become “memorable,” while most are discarded by the brain?

A computational model and behavioral study developed by Yale scientists suggest a new clue to this age-old question, the researchers report in the journal Nature Human Behaviour.

“The mind prioritizes remembering things that it is not able to explain very well,” said Ilker Yildirim, an assistant professor of psychology in Yale’s Faculty of Arts and Sciences and senior author of the paper.


“If a scene is predictable, and not surprising, it might be ignored.”

For example, a person may be briefly confused by the presence of a fire hydrant in a remote natural environment, making the image difficult to interpret, and therefore more memorable.

“Our study explored the question of which visual information is memorable by pairing a computational model of scene complexity with a behavioral study,” said Yildirim.

For the study, which was led by Yildirim and John Lafferty, the John C. Malone Professor of Statistics and Data Science at Yale, the researchers developed a computational model that addressed two steps in memory formation — the compression of visual signals and their reconstruction. 
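To make the compression-and-reconstruction idea concrete, here is a minimal sketch in Python, assuming a sparse coding model over image feature embeddings as described in the paper's abstract. The random embeddings, dictionary size, and sparsity level are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical stand-in for image feature embeddings (e.g., from a
# pretrained vision network); shape: (n_images, n_features).
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((200, 64))

# Compression: encode each embedding as a sparse combination of a
# small number of learned dictionary atoms.
model = DictionaryLearning(
    n_components=32,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,
    random_state=0,
)
codes = model.fit_transform(embeddings)       # compression step
reconstructions = codes @ model.components_   # reconstruction step

# Reconstruction residual per image: how poorly the sparse code
# explains the original embedding. The study's hypothesis is that
# larger residuals predict higher memorability.
residuals = np.linalg.norm(embeddings - reconstructions, axis=1)
```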

Based on this model, they designed a series of experiments in which people were asked if they remembered specific images from a sequence of natural images shown in rapid succession. 

The Yale team found that the harder it was for the computational model to reconstruct an image, the more likely the image would be remembered by the participants.
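Schematically, that result corresponds to a positive rank correlation between per-image reconstruction error and the fraction of participants who recalled each image. The sketch below only illustrates the shape of such an analysis; the numbers are fabricated stand-ins, not the paper's data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Fabricated stand-ins: per-image reconstruction errors and the
# fraction of participants who reported remembering each image.
residuals = rng.standard_normal(200) ** 2
recall_rates = rng.uniform(0.0, 1.0, size=200)

# The reported finding corresponds to a reliably positive rho here:
# harder-to-reconstruct images are recalled more often.
rho, p_value = spearmanr(residuals, recall_rates)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```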

“We used an AI model to try to shed light on perception of scenes by people — this understanding could help in the development of more efficient memory systems for AI in the future,” said Lafferty, who is also the director of the Center for Neurocomputation and Machine Intelligence at the Wu Tsai Institute at Yale. 

Former Yale graduate students Qi Lin (Psychology) and Zifan Lin (Statistics and Data Science) are co-first authors of the paper.

About this memory research news

Author: Bess Connolly
Source: Yale
Contact: Bess Connolly – Yale
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Images with harder-to-reconstruct visual representations leave stronger memory traces” by Ilker Yildirim et al. Nature Human Behaviour


Abstract

Images with harder-to-reconstruct visual representations leave stronger memory traces

Much of what we remember is not because of intentional selection, but simply a by-product of perceiving.

This raises a foundational question about the architecture of the mind: how does perception interface with and influence memory?

Here, inspired by a classic proposal relating perceptual processing to memory durability, the level-of-processing theory, we present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals from this model predict how well images are encoded into memory.

In an open memorability dataset of scene images, we show that reconstruction error not only explains memory accuracy, but also response latencies during retrieval, subsuming, in the latter case, all of the variance explained by powerful vision-only models.

We also confirm a prediction of this account with ‘model-driven psychophysics’.

This work establishes reconstruction error as an important signal interfacing perception and memory, possibly through adaptive modulation of perceptual processing.
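For intuition on the variance-explained claim in the abstract, the following sketch fits retrieval latencies with a vision-only predictor and then with reconstruction error added. All variables are synthetic stand-ins, constructed so that the added predictor carries extra variance; nothing here reproduces the paper's models or dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500

# Synthetic stand-ins: a vision-only memorability score, a correlated
# reconstruction-error signal, and response latencies that (by
# construction) depend mostly on reconstruction error.
vision_score = rng.standard_normal(n)
recon_error = 0.6 * vision_score + rng.standard_normal(n)
latency = -0.8 * recon_error + 0.2 * rng.standard_normal(n)

# R^2 from the vision-only predictor alone...
X_vision = vision_score[:, None]
r2_vision = LinearRegression().fit(X_vision, latency).score(X_vision, latency)

# ...versus a model that also includes reconstruction error.
X_both = np.column_stack([vision_score, recon_error])
r2_both = LinearRegression().fit(X_both, latency).score(X_both, latency)

print(f"vision-only R^2: {r2_vision:.3f}")
print(f"with reconstruction error R^2: {r2_both:.3f}")
```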
