Modeling How We See Natural Scenes
Washington DC (SPX) May 27, 2008 - Sophisticated mathematical modeling methods and a "CatCam" that captures feline-centric video of a forest are two elements of a new effort to explain how the brain's visual circuitry processes real scenes. The new model of the neural responses of a major visual-processing brain region promises to significantly advance understanding of vision. Valerio Mante and colleagues published a description of their model and its properties in an article in the May 22, 2008, issue of the journal Neuron, published by Cell Press.

The researchers sought to develop the new model because, until now, studies of the visual system have relied on simple stimuli such as dots, bars, and gratings. "Such simple, artificial stimuli present overwhelming advantages in terms of experimental control: their simple visual features can be tailored to isolate and study the function of one or few of the several mechanisms shaping the responses of visual neurons," wrote the researchers.

"Ultimately, however, we need to understand how neurons respond not only to these simple stimuli but also to image sequences that are arbitrarily complex, including those encountered in natural vision. The visual system evolved while viewing complex scenes, and its function may be uniquely adapted to the structure of natural images," they wrote.

Specifically, the researchers sought to model the neuronal response of the lateral geniculate nucleus (LGN) in the thalamus, a brain region that processes raw visual signals received from the retina.

To gather data for the model, they first recorded from LGN neurons in anesthetized cats as the cats were presented with images of drifting gratings of different sizes, locations, and spatial and temporal frequencies. They also varied the luminance and contrast of the images. From these data, they created a mathematical model that aimed to describe how these neurons respond and adapt to such complex, changing stimuli. Their ultimate goal was to create a model that would describe neural responses not just to the gratings but to the complexities of natural scenes.

To test their model, they presented cats with two kinds of natural scenes while recording from LGN neurons. One was video recorded from a "CatCam" mounted on the head of a cat as it roamed through a forest. The other was a set of short sequences from the animated Disney movie Tarzan.

The researchers found that their model predicted "much of the responses to complex, rapidly changing stimuli... Specifically, the model captures how these responses are affected by changes in luminance and contrast level, overcoming many of the shortcomings of simpler models," they wrote. "Even though our model does not capture the operation of all known nonlinear mechanisms, it promises to be a useful tool to understand the computations performed by the early visual system," they wrote.

Mante and colleagues have provided "a long-needed bridge between the two stimulus worlds," wrote Garrett Stanley of the Georgia Institute of Technology in a preview of the paper in the same issue of Neuron. "By creating an encoding model from a set of experiments involving sinusoidal gratings at different mean luminances and contrasts, and subsequently demonstrating that this model predicts the neuronal response to an entirely different class of visual stimuli based on the visual scene alone, Mante, et al. have made this problem general and provided a powerful description of the encoding properties of the pathway," wrote Stanley.
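For readers who want a concrete sense of the pipeline described above, the sketch below shows a toy encoding model of the same general kind: a linear receptive field followed by divisive luminance and contrast gain control, evaluated frame by frame on an arbitrary movie. It is not the model published by Mante and colleagues; the difference-of-Gaussians receptive field, the exponential temporal filter, the particular gain-control equation, the function names, and every parameter value are illustrative assumptions.

```python
# Toy LGN-style encoding model with luminance and contrast gain control.
# Illustrative sketch only: the receptive field, gain-control form, and all
# parameters below are assumptions, not the published Mante et al. model.
import numpy as np

def center_surround_filter(size=21, sigma_c=1.5, sigma_s=4.5):
    """Difference-of-Gaussians kernel, a common approximation of an LGN receptive field."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center - surround

def drifting_grating(n_frames=200, size=64, spatial_freq=0.1, temporal_freq=4.0,
                     dt=0.01, contrast=0.5, mean_lum=0.5):
    """Drifting sinusoidal grating movie (n_frames x size x size, values in [0, 1])."""
    xx, _ = np.meshgrid(np.arange(size), np.arange(size))
    frames = [mean_lum * (1.0 + contrast * np.sin(2 * np.pi * (spatial_freq * xx - temporal_freq * t * dt)))
              for t in range(n_frames)]
    return np.stack(frames)

def predict_rate(movie, dt=0.01, tau=0.04, l_half=0.3, c_half=0.2):
    """Predict a firing-rate trace for a movie (time x height x width, frames at least 21 x 21).

    Pipeline: (1) linear spatial filtering of a central patch with a center-surround
    kernel, (2) exponential temporal filtering, (3) divisive gain control by local
    mean luminance and local RMS contrast, (4) half-wave rectification.
    """
    rf = center_surround_filter()
    k = rf.shape[0]
    n_t, h, w = movie.shape
    r0, c0 = (h - k) // 2, (w - k) // 2               # place the receptive field at the frame centre
    drive, lum, con = np.empty(n_t), np.empty(n_t), np.empty(n_t)
    for t in range(n_t):
        patch = movie[t, r0:r0 + k, c0:c0 + k]
        drive[t] = np.sum(patch * rf)                  # linear spatial filtering
        lum[t] = patch.mean()                          # local mean luminance
        con[t] = patch.std() / max(patch.mean(), 1e-6) # local RMS contrast
    # Exponential temporal filter applied to the linear drive.
    t_axis = np.arange(int(5 * tau / dt)) * dt
    temporal = np.exp(-t_axis / tau)
    temporal /= temporal.sum()
    filtered = np.convolve(drive, temporal)[:n_t]
    # Divisive luminance and contrast gain control, then half-wave rectification.
    gain = 1.0 / ((1.0 + lum / l_half) * (1.0 + con / c_half))
    return np.maximum(filtered * gain, 0.0)

# Example: a high-contrast grating as a stand-in for the experimental stimuli;
# any natural movie (e.g. CatCam-style footage) converted to a (time, h, w)
# array in [0, 1] could be passed to predict_rate in exactly the same way.
rate = predict_rate(drifting_grating(contrast=0.8))
```

In the study itself the model was built from the grating recordings and then used, unchanged, to predict responses to the CatCam and Tarzan movies; the analogous step in this sketch would be to fit the filter and gain-control parameters on grating responses before applying predict_rate to natural footage.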