Does depth perception require vertical disparity detectors?

This paper presents a cool idea which turned out (probably) not to be true. There's a wealth of psychophysical evidence indicating that humans measure vertical disparity and use it to calibrate oculomotor signals about eye position. This has been taken, without much thought, as "obviously" indicating that the brain must therefore contain detectors tuned to a range of vertical disparities, and various physiological studies have looked for them (so far with little success, due to a range of interpretation problems).

We realised that the psychophysical performance could actually be explained if the brain only had access to the magnitude, not the sign, of vertical disparity. Essentially, this is because the sign of vertical disparity follows a predictable pattern across the visual field, so it does not need to be measured explicitly. This is important because, if all you need is the magnitude of vertical disparity, you can get it from a population of purely horizontal disparity detectors: vertical disparity reduces the binocular correlation these detectors sense, so its magnitude can be deduced from the size of that reduction. This seemed to us a very elegant solution for the brain to adopt. Most disparities are horizontal (see Read & Cumming 2004), so it would enable the brain to concentrate its detectors according to the statistics of natural viewing (clearly the efficient thing to do) while still being able to extract information from vertical disparities when they do occur. It also raised the exciting possibility of mimicking vertical-disparity illusions with binocular correlation -- stereopsis without disparity.

Sadly, however, my attempts to produce these illusions failed, and I now believe this idea is not correct. In 2010, I returned to this train of thought and developed a more sophisticated model which can deduce both the magnitude and the sign of vertical disparity from a population of purely horizontal disparity detectors.
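To make the mechanism concrete, here is a minimal NumPy sketch of the basic idea, not the model from the paper itself: the stimulus and all parameters are illustrative. The right eye's image is a vertically shifted copy of the left eye's, and the interocular correlation, which is roughly what a horizontal-disparity detector measures, falls off symmetrically with the size of the vertical shift, so it carries the magnitude but not the sign of the vertical disparity.

```python
import numpy as np

rng = np.random.default_rng(0)

def binocular_correlation(dv, size=200):
    """Interocular correlation for a toy random-dot stimulus in which the
    right eye's image is the left eye's image shifted vertically by dv
    pixels (zero horizontal disparity)."""
    pad = abs(dv) + 1
    left = rng.standard_normal((size + 2 * pad, size))
    # Smooth vertically so nearby rows are correlated, standing in for a
    # band-limited stimulus / receptive-field blur.
    kernel = np.ones(5) / 5
    left = np.apply_along_axis(lambda col: np.convolve(col, kernel, "same"), 0, left)
    right = np.roll(left, dv, axis=0)   # apply the vertical disparity
    core = slice(pad, -pad)             # discard rows affected by wrap-around
    return np.corrcoef(left[core].ravel(), right[core].ravel())[0, 1]

for dv in (-4, -2, -1, 0, 1, 2, 4):
    print(f"vertical disparity {dv:+d} px -> correlation {binocular_correlation(dv):.3f}")
# Correlation peaks at dv = 0 and falls off symmetrically with |dv|;
# the sign of the vertical disparity leaves no trace in this signal,
# so only its magnitude could be read out this way.
```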
ReadCumming06.pdf
File Size: 3.9 MiB
Date: January 16, 2012
Downloads: 2416
Authors: Read JCA, Cumming BG