A device that can ‘read’ the brain and produce a picture of a person’s visual experience could soon become a reality. Online in Nature this week researchers describe a model that defines the relationship between visual stimuli and functional magnetic resonance imaging (fMRI) activity in early visual areas, making it possible to identify specific images seen by an observer.
Jack L. Gallant and colleagues at the University of California, Berkeley, US, developed ‘receptive-field’ models that combine measures of spatial position, orientation and spatial frequency of brain activity in order to identify which of a set of novel, complex natural images a subject was viewing. Previous attempts to interpret visual experiences through fMRI had succeeded only in decoding much simpler information.
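The identification scheme the article describes can be sketched in miniature: an encoding model predicts each voxel's fMRI response from an image's features, and a viewed image is identified by finding the candidate whose predicted activity pattern best matches the measured one. This is a hypothetical illustration with simulated data, not the authors' code; the linear model `W`, the feature vectors and the noise level are all assumptions for the sketch.

```python
# Hypothetical sketch of model-based identification (not the authors'
# code). A learned encoding model maps image features (in the real
# study, outputs of orientation/spatial-frequency filters at many
# positions) to voxel responses; a new image is identified by choosing
# the candidate whose predicted pattern best matches the observed one.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_candidates = 50, 20, 100

# Assumed: a linear encoding model W (voxels x features), which in
# practice would be fit by regression on responses to training images.
W = rng.standard_normal((n_voxels, n_features))

# Feature vectors for the candidate image set (one row per image).
candidates = rng.standard_normal((n_candidates, n_features))

# Simulate the observed fMRI pattern for one viewed image, with noise.
viewed = 42
observed = W @ candidates[viewed] + 0.1 * rng.standard_normal(n_voxels)

# Predict the activity pattern for every candidate image, then pick
# the candidate whose prediction correlates best with the observation.
predicted = candidates @ W.T  # shape: (n_candidates, n_voxels)
corr = [np.corrcoef(observed, p)[0, 1] for p in predicted]
identified = int(np.argmax(corr))
print(identified)
```

With low noise the matching step recovers the viewed image; the real difficulty lies in fitting receptive-field models accurate enough for this matching to work on natural images.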
The researchers suggest that, in future, this model-based approach to decoding brain signals could be used to track mental processes such as attention, investigate differences in perception among people, and perhaps even provide access to the visual content of phenomena such as dreams and imagery.