Vision is our most developed sense and unsurprisingly a substantial amount of brain processing is devoted to it, with over half the primate brain involved in vision-related processing. A first step in understanding the nature of this processing involves an appreciation of the capabilities and specialization that define human vision. Such an understanding of our overall visual strengths and weaknesses provides a framework within which the emerging neurophysiology and neuroanatomy of different parts of the pathway can be best understood.
Early visual processing of spatial form has to contend with the fact that information first impinging on receptors in the eye must be transmitted to the main processing sites in the brain. Transmission bandwidth is limited so that the optic nerve and optic tract remain of manageable size. This means that, of all the information contained in a retinal image, only a fraction can be efficiently transmitted and processed; hard choices have to be made. This leads us to the concepts of sensory filtering and information compression. The fact that individual cells have limited dynamic ranges also leads to a consideration of distributed parallel processing. These notions, namely filtering, compression, and parallel processing, are important principles in sensory analysis in general and visual analysis in particular.
Quantitative methods for characterizing human vision have been developed only relatively recently. They grew out of the realization of the importance of contrast and object size (or spatial frequency). That contrast is fundamental to the visibility of objects had been known for a long time. Bouguer made the first quantitative measurements of contrast sensitivity, using the visibility of the shadow of a rod produced by a candle a specified distance away. That the size of objects (more correctly, their retinal image size) also determines their visibility has likewise been appreciated for a long time, being the basis of acuity testing in the last century. The realization that the relationship between these two variables could provide a more complete description of human vision was a logical consequence of the emerging work on optical transfer functions and was accelerated by the need for calibrated photographic surveillance methods during World War II. Amid great controversy, particularly from certain optical physicists of the time, Campbell and co-workers used this approach to great effect to provide a quantitative description of the capability of the human visual system, which in turn provided a starting point for considerations of the optical quality of the human eye and its neural limitations. This function, often referred to as the spatial contrast sensitivity function, provided the first quantitative description of our “window of visibility” and made explicit the information that our visual system processes best.
Figure 32.1 displays a sinusoidal grating stimulus, first developed by Fergus Campbell and John Robson at Cambridge, in which the spatial frequency (the number of cycles subtended in one degree at the eye) is modulated along the abscissa (low spatial frequencies on the left, high spatial frequencies on the right) and contrast is modulated along the ordinate (low contrasts at the top, high contrasts at the bottom). The reader can observe an inverted U-shaped region in which the stripes are visible; this is a demonstration of our window of visibility.
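A chart of this kind is easy to generate numerically. The following sketch (in Python with NumPy and Matplotlib) shows one way to do so; the frequency range (0.5 to 32 c/deg), contrast range, and display geometry are illustrative assumptions rather than the values used by Campbell and Robson.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative Campbell-Robson style chart.
# Assumed display geometry: the image spans 10 degrees of visual angle horizontally.
width_px, height_px = 1024, 512
degrees = 10.0

x = np.linspace(0.0, degrees, width_px)        # horizontal position (deg)
y = np.linspace(0.0, 1.0, height_px)[:, None]  # vertical position (0 = top, 1 = bottom)

# Spatial frequency increases exponentially from left (0.5 c/deg) to right (32 c/deg).
freq = 0.5 * (32.0 / 0.5) ** (x / degrees)
# Integrate frequency over x so the phase stays continuous as frequency changes.
phase = 2.0 * np.pi * np.cumsum(freq) * (degrees / width_px)

# Contrast decreases exponentially from bottom (100%) to top (0.1%).
contrast = 10.0 ** (-3.0 * (1.0 - y))

# Luminance pattern around a mean level of 0.5.
image = 0.5 + 0.5 * contrast * np.sin(phase)[None, :]

plt.imshow(image, cmap="gray", vmin=0.0, vmax=1.0, aspect="auto")
plt.axis("off")
plt.show()
```

Viewed at an appropriate distance, the visible stripes trace out the same inverted U-shaped boundary described above.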
Figure 32.2 displays a laboratory measurement of the contrast sensitivity function for a normal observer; its overall shape mirrors that seen in Figure 32.1. Here, contrast sensitivity, the reciprocal of the contrast needed for threshold detection, is plotted against the spatial frequency of a luminance-defined sinusoidal grating (filled circles), in cycles per degree of angle subtended at the eye. Our window of visibility is defined by an inverted U-shaped function showing that we are best at detecting objects subtending about a half to a third of a degree (approximately half the width of one's fingernail at arm's length). Our sensitivity for detecting smaller objects (i.e. of higher spatial frequency) progressively declines, as it also does for detecting objects larger (i.e. of lower spatial frequency) than one degree. Information in the very low contrast range, particularly if it is of very low or very high spatial frequency, is not processed at all by our visual system; human vision is specialized for the intermediate spatial frequency range, and within this limited range our contrast sensitivity is good to one part in 500 (i.e. 0.2 percent) under monocular conditions. Other animals have visual capabilities adapted to match their particular needs; the cat is better than the human at detecting low spatial frequencies and the falcon is better than the human at detecting high spatial frequencies. These benefits, however, come with associated costs: the cat is much worse than us at detecting high spatial frequencies and the falcon much worse than us at detecting low spatial frequencies. In other words, the size of the window of visibility is approximately the same for different animals but shifts along the abscissa (i.e. the spatial frequency axis) to best suit the needs of the animal. For the cat this means objects within pouncing distance (a shift of the function to lower spatial frequencies); for the falcon it means detecting ground prey while hovering high in the sky (a shift of the function to higher spatial frequencies).
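The relationship between threshold contrast and sensitivity is a simple reciprocal, as this minimal sketch illustrates (the Michelson definition of grating contrast is assumed here; it is not spelled out in the text above).

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast of a grating: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def contrast_sensitivity(threshold_contrast: float) -> float:
    """Contrast sensitivity is the reciprocal of the threshold contrast."""
    return 1.0 / threshold_contrast

# A peak threshold contrast of 0.2 percent corresponds to a sensitivity of 500,
# the "one part in 500" figure quoted above for intermediate spatial frequencies.
print(contrast_sensitivity(0.002))  # -> 500.0
```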
Why are we poor at detecting objects of very high and very low spatial frequency? In terms of the former, an obvious possibility is that the retinal image may not contain this information in the first place because of optical (i.e. cornea, lens) losses. Campbell & Green, using a laser interferometric technique to bypass the optics, showed that the fall-off of contrast sensitivity at high spatial frequencies had only a small (about one-third) optical component. They argued that two-thirds of the sensitivity loss at high spatial frequencies was neural in nature, due to limitations at the retina/brain (Box 32.1). More recently it has been suggested that the quantal fluctuations in light itself may account for the loss at high spatial frequency. Although not yet resolved, in light of the discussion below in the section on luminance effects, the original suggestion that this fall-off is neural in nature seems more likely. The sensitivity loss at low spatial frequencies occurs only for sinusoidal stimuli (i.e. as opposed to square-edged stimuli, as shown in Figure 32.1), and many different explanations have been put forward, including relative image stabilization, a consequence of the reduced number of cycles, and lateral inhibition. None of these holds up to scrutiny; providing comparable temporal stimulation and the same number of spatial cycles does not eliminate the low spatial frequency decline in sensitivity. The lateral inhibition explanation can also be rejected for reasons that are outlined in the section below on the relation to single cells. The explanation of this low spatial frequency loss has therefore proved elusive.
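The logic behind separating the optical and neural contributions can be sketched as follows, assuming that overall sensitivity is the product of an optical (modulation transfer) factor and a neural factor; the numbers below are hypothetical and purely illustrative, not Campbell & Green's data.

```python
# Assumed multiplicative model: sensitivity measured through the eye's optics equals
# the neural sensitivity (obtainable by bypassing the optics interferometrically)
# multiplied by the optical modulation transfer at that spatial frequency.
# Hypothetical values for a single high spatial frequency, for illustration only.
interferometric_sensitivity = 30.0   # optics bypassed: neural limit alone
conventional_sensitivity = 20.0      # viewed through cornea and lens

optical_transfer = conventional_sensitivity / interferometric_sensitivity
print(f"Estimated optical modulation transfer: {optical_transfer:.2f}")
print(f"Estimated optical share of the loss at this frequency: {1 - optical_transfer:.0%}")
```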
A contrast sensitivity loss at high spatial frequencies may be optical or neural in nature.
Neurons at various levels of the visual pathway exhibit spatially overlapping excitatory and inhibitory receptive field properties that endow them with size or spatial frequency dependence, as illustrated in Figure 32.5. This size selectivity becomes greater as one progresses from retina to cortex owing to the increased strength of the antagonistic surround. These neurons also exhibit contrast thresholds, quasi-linear contrast response regions, and contrast saturation responses. Indeed, efforts to outline the optimum spatial frequency response of a sample of contrast-sensitive neurons in the monkey have provided a distribution similar to that of the human behavioral contrast sensitivity function. This can be seen in Figure 32.3 from the histogram of the distribution of cellular contrast sensitivities in monkey V1 relative to human behavioral sensitivities. The contrast sensitivity function represents the envelope of all the contrast-responding cells, with the most sensitive ones defining the threshold limit.
Visual cortical neurons have sufficiently strong surrounds to their classical receptive fields to make them selective over a limited part of the spatial frequency range; an example is shown in Figure 32.4. Thus different populations of cortical cells underlie the contrast sensitivity curve at different locations along the spatial frequency axis, the overall contrast sensitivity curve being the envelope of the sensitivities of the most sensitive cortical cells. This is diagrammatically illustrated in Figure 32.5.
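The envelope idea can be sketched numerically with a bank of hypothetical centre-surround (difference-of-Gaussians) channels; all the widths, weights, and peak sensitivities below are arbitrary assumptions chosen only to show the principle, not measured values.

```python
import numpy as np

def dog_frequency_response(f, centre_sigma, surround_sigma, surround_weight):
    """Fourier amplitude of a centre-surround (difference-of-Gaussians) receptive field.

    The Fourier transform of a spatial Gaussian of width sigma (deg) is a Gaussian in
    frequency f (c/deg) of width 1/(2*pi*sigma); the surround subtracts from the centre,
    producing a band-pass (spatial frequency selective) response.
    """
    centre = np.exp(-2.0 * (np.pi * centre_sigma * f) ** 2)
    surround = surround_weight * np.exp(-2.0 * (np.pi * surround_sigma * f) ** 2)
    return np.clip(centre - surround, 0.0, None)

f = np.logspace(-1, 1.7, 200)  # 0.1 to ~50 c/deg

# A bank of hypothetical channels tuned to progressively higher spatial frequencies
# (smaller receptive fields), each scaled by an assumed sensitivity weight.
channels = [
    weight * dog_frequency_response(f, sigma, 3.0 * sigma, 0.9)
    for weight, sigma in [(30, 0.5), (120, 0.2), (300, 0.08), (500, 0.03), (150, 0.012)]
]

# The overall contrast sensitivity function is modelled as the envelope (pointwise
# maximum) of the individual channel sensitivities: the most sensitive channel at
# each spatial frequency sets the detection threshold.
csf_envelope = np.max(channels, axis=0)
print(f"Envelope peaks near {f[np.argmax(csf_envelope)]:.1f} c/deg")
```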
Supporting psychophysical evidence comes from the finding that adaptation, or prolonged viewing, of stimuli of one particular spatial frequency desensitizes responses only to stimuli of a similar spatial frequency, also suggesting that the overall curve is composed of a number of more discrete “channels”. At one time it was thought that the low spatial frequency fall-off in sensitivity was a reflection of the surround inhibition of the largest (or lowest spatial frequency-tuned) mechanism, whose peak was located around 1 c/deg. There is now ample evidence to doubt this; individual spatial frequency-selective mechanisms have been reported down to as low as 0.2 c/deg. Thus individual cortical mechanisms extend across the full extent of the visible spatial frequency range. The reason why the lower spatial frequency mechanisms are less sensitive is unknown, but one possibility is a purely retinal cause: larger receptive fields must inevitably contain more inactive rods (contributing no signal but possibly contributing noise) under photopic conditions.
In the retina there are morphologically and functionally different populations of ganglion cells, two of which, the parasol (magno; 10%) and midget (parvo; 80%) retinal ganglion cells, send their afferents along the retino-geniculate pathway. These projections are kept separate in the lateral geniculate nucleus, terminating in the magnocellular and parvocellular layers respectively (the so-called M-cells and P-cells). The afferents of these two geniculate cell types are sent to different subdivisions of layer 4C of the visual cortex (i.e. 4C alpha and 4C beta) and ultimately make a dominant (though not exclusive) contribution to the dorsal and ventral extrastriate streams respectively.