Here, we returned to the question of how to extract stimulus disparity from a population of binocular neurons such as seem to exist in primary visual cortex. Once again, we used the energy model of Ohzawa, DeAngelis and Freeman (1990). The output of this model depends on the receptive fields in the two eyes. The physiological literature shows that these receptive fields are usually well-described by Gabor functions, with similar spatial frequency and orientation tuning, but differing in their phase and/or position on the retina. It makes sense to have receptive fields which differ in retinal position -- you can view these as "position-disparity detectors", sensing objects at different positions in space. But we wondered why you find receptive fields differing in phase, "phase-disparity detectors". These do work as disparity detectors, but they seem suboptimal -- they respond best to retinal patterns that are never generated by real objects. If the brain contained only these phase-disparity detectors, instead of the more suitable position-disparity detectors, you might reckon there was some developmental constraint which meant that the brain just couldn't wire up position-disparity detectors. But since it clearly can generate position disparity, what's the point of having phase-disparity detectors as well?
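To make the distinction concrete, here is a minimal sketch of the binocular energy model in one dimension. All parameter values (envelope width, spatial frequency, the disparities) are illustrative assumptions, not values from the paper. A quadrature pair of simple cells sums each eye's image filtered through a Gabor receptive field, squares, and adds; a position-disparity detector shifts the right eye's Gabor, and the sketch checks that the detector tuned to a stimulus's true disparity responds, on average, more strongly than one tuned elsewhere:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D "retina" and illustrative Gabor parameters
x = np.linspace(-2.0, 2.0, 401)
sigma, freq = 0.4, 1.5

def gabor(x0=0.0, phase=0.0):
    """1-D Gabor receptive field: Gaussian envelope times a cosine carrier."""
    return np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) * \
        np.cos(2 * np.pi * freq * (x - x0) + phase)

def energy(left_img, right_img, dx=0.0, dphase=0.0):
    """Binocular energy model response.

    dx shifts the right eye's receptive field (position disparity);
    dphase shifts its carrier (phase disparity). Summing the squared
    outputs of a quadrature pair gives the complex-cell response.
    """
    out = 0.0
    for ph in (0.0, np.pi / 2):                       # quadrature pair
        v_left = left_img @ gabor(0.0, ph)
        v_right = right_img @ gabor(dx, ph + dphase)
        out += (v_left + v_right) ** 2
    return out

# A real surface: the same noise pattern, shifted between the eyes
true_disp = 0.5
shift = int(round(true_disp / (x[1] - x[0])))
resp_tuned = resp_untuned = 0.0
for _ in range(200):
    pattern = rng.standard_normal(x.size + shift)
    left, right = pattern[shift:], pattern[:-shift]
    resp_tuned += energy(left, right, dx=true_disp)    # matched to the stimulus
    resp_untuned += energy(left, right, dx=-true_disp)  # tuned elsewhere

print(resp_tuned > resp_untuned)
```

Averaged over many noise patterns, the detector whose position disparity matches the stimulus wins, which is what lets a population of such detectors signal depth.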
Phase-disparity detectors respond best to different patterns of light and dark in the two eyes. Real objects always project the same pattern to both eyes, so phase-disparity detectors don't respond best to real objects. They respond best to unrelated regions of the visual scene, i.e. where the regions of the two retinae viewed by a particular binocular neuron do not correspond to the same object in space. In other words, they respond best to false matches. This could be very useful, because position-disparity detectors are plagued by false matches. It's difficult to interpret their response: a strong response does not necessarily indicate that there is an object at the disparity to which the detector is tuned; it could be a false match. We realised that you could solve this problem by using the phase-disparity detectors as "lie detectors". For each possible match provided by the position-disparity detectors, the pattern of responses of the corresponding phase-disparity detectors reveals whether the match is true or false.
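The "lie detector" idea can also be sketched in code (again with illustrative, assumed parameters). A phase-disparity detector whose right-eye carrier is shifted by half a cycle sees the two eyes' inputs cancel when both eyes view the same pattern, so it is nearly silent at a true match but responds to uncorrelated inputs; a zero-disparity position detector shows the opposite pattern. Comparing the two therefore flags false matches:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D "retina" and illustrative Gabor parameters
x = np.linspace(-2.0, 2.0, 401)
sigma, freq = 0.4, 1.5

def gabor(x0=0.0, phase=0.0):
    """1-D Gabor receptive field."""
    return np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) * \
        np.cos(2 * np.pi * freq * (x - x0) + phase)

def energy(left_img, right_img, dphase=0.0):
    """Energy-model response; dphase is the right eye's phase disparity."""
    out = 0.0
    for ph in (0.0, np.pi / 2):                    # quadrature pair
        out += (left_img @ gabor(0.0, ph)
                + right_img @ gabor(0.0, ph + dphase)) ** 2
    return out

pos_true = pos_false = phase_true = phase_false = 0.0
for _ in range(200):
    img = rng.standard_normal(x.size)
    unrelated = rng.standard_normal(x.size)
    # True match: both eyes view the same surface
    pos_true += energy(img, img, dphase=0.0)
    phase_true += energy(img, img, dphase=np.pi)
    # False match: the two eyes' windows land on unrelated scene regions
    pos_false += energy(img, unrelated, dphase=0.0)
    phase_false += energy(img, unrelated, dphase=np.pi)

print(phase_false > phase_true)  # phase detector prefers false matches
print(pos_true > pos_false)      # position detector prefers true matches
```

In this toy version a strong phase-detector response vetoes the candidate match that the position detector proposed, which is the essence of using phase disparity to weed out false matches.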
At the moment, this is just an idea; we don't know if this is really how the brain uses phase-disparity detectors. We hope this paper will stimulate experimental investigations which will either confirm or rule out our suggestion, as well as prompt further consideration of the role of phase disparity in the brain.