The simple acts of recognizing a friend's face or noticing the color of their hair require an enormous amount of work by our brains. My lab focuses on understanding this work. We use the methods of cognitive neuroscience to characterize precisely how neurons in the human brain support visual perception. Most of our experiments combine behavioral measurements of perception with functional MRI measurements of neural activity and mathematical modeling.
One main topic under investigation is plasticity in the visual system. We are currently investigating the extent to which the adult visual system can modify itself through visual adaptation and learning. The adaptation work examines how the visual system responds to changes in the environment, using a novel "altered reality" technology that allows users to see a world just like ours, but one that differs in a controlled way. We are currently studying, for example, how the visual system changes when we remove (or enhance) all vertical lines from what subjects see, over periods of minutes, hours, and even multiple days.
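The idea of removing (or enhancing) all vertical structure from a visual stream can be illustrated with a simple orientation filter. The sketch below, written with NumPy, attenuates vertically oriented content in a grayscale image by masking the corresponding band of the Fourier spectrum; the function name, the bandwidth parameter, and the whole approach are illustrative assumptions, not the lab's actual altered-reality pipeline.

```python
import numpy as np

def filter_vertical_energy(image, bandwidth_deg=20.0, gain=0.0):
    """Attenuate (gain=0) or boost (gain>1) vertically oriented
    structure in a grayscale image via a Fourier orientation mask.
    Vertical lines carry their energy along the horizontal
    frequency axis, so that band is scaled by `gain`.
    NOTE: an illustrative sketch, not the lab's actual system."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]        # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]        # horizontal frequencies
    # Orientation of each frequency component; 0 deg = horizontal axis,
    # which is where vertical image structure lives in the spectrum.
    angle = np.degrees(np.arctan2(np.abs(fy), np.abs(fx)))
    mask = np.ones((h, w))
    mask[angle < bandwidth_deg] = gain     # scale the "vertical" band
    mask[0, 0] = 1.0                       # always keep mean luminance
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))
```

With `gain=0` a grating of vertical stripes is flattened to its mean, while horizontal stripes pass through essentially unchanged; a real-time version of this kind of manipulation, applied to a head-mounted camera feed, is the spirit of the altered-reality approach.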
The learning work builds on the fact that, with training, the adult human visual system can improve dramatically at almost any task. Radiologists, for example, can see patterns in x-ray images that are invisible to the untrained eye. One recent study from our lab investigated the changes in the brain that underlie this expertise. Other studies examine training on simpler tasks, such as detecting the presence of a very faint line on an otherwise blank screen. Training on these simple tasks has allowed us to measure changes in some of the earliest cortical stages of vision, regions that were once viewed as lacking plasticity in the adult brain.
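Detection thresholds in tasks like the faint-line task are commonly tracked with adaptive staircases. The sketch below assumes a standard 2-down/1-up rule, one common psychophysical procedure and not necessarily the one used in the lab; `detect`, the step size, and the stopping rule are all hypothetical placeholders.

```python
def staircase_threshold(detect, start=0.5, step=0.05, n_reversals=8):
    """Estimate a detection threshold with a 2-down/1-up staircase:
    stimulus contrast drops after two consecutive correct responses
    and rises after any miss, converging near 70.7% correct.
    `detect(contrast)` runs one trial and returns True/False.
    NOTE: a generic textbook procedure, shown for illustration."""
    contrast, correct_run, last_dir = start, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        if detect(contrast):
            correct_run += 1
            if correct_run < 2:
                continue                     # wait for a second correct
            correct_run, direction = 0, -1   # 2 correct -> harder
        else:
            correct_run, direction = 0, +1   # any miss -> easier
        if last_dir is not None and direction != last_dir:
            reversals.append(contrast)       # direction flip = reversal
        last_dir = direction
        contrast = max(step, contrast + direction * step)
    return sum(reversals) / len(reversals)   # mean reversal contrast
```

Averaging the contrasts at the reversal points gives a threshold estimate; plotting how that estimate falls over days of training is one simple way to quantify the perceptual learning described above.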
A long-standing goal of the lab has been to develop cutting-edge tools for studying human vision. To this end, we helped pioneer functional MR imaging of human visual cortex, and we continue to expand its use. We are currently pushing the temporal limits of the method to measure the timing of cognitive processes with accuracy at or below 100 ms. We also recently developed an altered reality system that enables the first laboratory studies of long-term visual adaptation targeting early visual processing.