Contrast thresholds reveal different visual masking functions in humans and praying mantises by Tarawneh G, Nityananda V, Rosner R, Errington S, Herbert W, Arranz-Paraíso S, Busby N, Tampin J, Read JCA, Serrano-Pedraza I, TarawnehNityanandaRosnerErringtonHerbertArranzParaisoBusbyTampinSerranoSerranoPedraza2018.pdf (2.3 MiB) - Recently, we showed a novel property of the Hassenstein–Reichardt detector, namely that insect motion detection can be masked by ‘undetectable’ noise, i.e. visual noise presented at spatial frequencies at which coherently moving gratings do not elicit a response (Tarawneh et al., 2017). That study compared the responses of human and insect motion detectors using different ways of quantifying masking (contrast threshold in humans and masking tuning function in insects). In addition, some adjustments in experimental procedure, such as presenting the stimulus at a short viewing distance, were necessary to elicit a response in insects. These differences offer alternative explanations for the observed difference between human and insect responses to visual motion noise. Here, we report the results of new masking experiments in which we test whether differences in experimental paradigm and stimulus presentation between humans and insects can account for the undetectable noise effect reported earlier. We obtained contrast thresholds at two signal and two noise frequencies in both humans and praying mantises (Sphodromantis lineola), and compared contrast threshold differences when the noise has the same spatial frequency as the signal versus a different one. Furthermore, we investigated whether differences in viewing geometry had any qualitative impact on the results. Consistent with our earlier finding, differences in contrast threshold show that visual noise masks much more effectively when presented at the signal spatial frequency in humans (compared to a lower or higher spatial frequency), while in insects, noise is roughly equally effective when presented at either the signal spatial frequency or a lower one (compared to a higher spatial frequency). The characteristic difference between human and insect responses was unaffected by correcting for the stimulus distortion caused by short viewing distances in insects. These findings constitute stronger evidence that the undetectable noise effect reported earlier is a genuine difference between human and insect motion processing, and not an artefact caused by differences in experimental paradigms.
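The qualitative logic of this result can be illustrated with a back-of-envelope calculation: if masking strength simply tracks how much noise power the early spatial filter passes, then a bandpass front end (human-like) should mask mainly at the signal frequency, while a lowpass front end (insect-like) should mask at the signal frequency and below. The Python sketch below uses an assumed Gaussian lowpass and log-Gaussian bandpass filter shape; all the numbers are hypothetical, not fits to the data.

```python
# Illustrative filter gains only; the filter shapes and numbers are
# assumptions for the demo, not fits to the experimental data.
import numpy as np

f_signal = 0.2  # signal spatial frequency (arbitrary units)
noise_freqs = {"below signal": 0.05, "at signal": 0.2, "above signal": 0.8}

lowpass = lambda f: np.exp(-(f / 0.25) ** 2)                   # insect-like
bandpass = lambda f: np.exp(-np.log(f / f_signal) ** 2 / 0.5)  # human-like

for label, f in noise_freqs.items():
    print(f"noise {label:>12}: lowpass gain {lowpass(f):.2f}, "
          f"bandpass gain {bandpass(f):.2f}")
```

On this reading, the pattern of threshold elevations reported above follows directly from which noise frequencies survive the front-end filter.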
Invisible noise obscures visible signal in insect motion detection by Tarawneh G, Nityananda V, Rosner R, Errington S, Herbert W, Cumming BG, Read JCA, Serrano-Pedraza I, TarawnehNityanandaRosnerErringtonHerbertCummingReadSerranoPedraza2017.pdf (2.6 MiB) - The motion energy model is the standard account of motion detection in animals from beetles to humans. Despite this common basis, we show here that a difference in the early stages of visual processing between mammals and insects leads this model to make radically different behavioural predictions. In insects, early filtering is spatially lowpass, which makes the surprising prediction that motion detection can be impaired by “invisible” noise, i.e. noise at a spatial frequency that elicits no response when presented on its own as a signal. We confirm this prediction using the optomotor response of the praying mantis Sphodromantis lineola. This does not occur in mammals, where spatially bandpass early filtering means that linear systems techniques, such as deriving channel sensitivity from masking functions, remain approximately valid. Counter-intuitive effects such as masking by invisible noise may occur in neural circuits wherever a nonlinearity is followed by a difference operation.
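To make the mechanism concrete, here is a minimal Python sketch with assumed filter shapes, a two-point opponent Reichardt correlator, and per-frame phase-randomised noise (a stand-in for the paper's actual stimuli, not its fitted model). The multiplicative nonlinearity followed by the opponent difference lets any noise the front end passes inject trial-to-trial variability, even though that noise drives no mean motion response on its own.

```python
# Toy model only: the Gaussian lowpass / log-Gaussian bandpass front ends,
# two-point Reichardt correlator, and dynamic noise are all illustrative
# assumptions, not the paper's fitted model.
import numpy as np

rng = np.random.default_rng(0)

def lowpass(f, fc=0.2):                      # insect-like early filtering
    return np.exp(-(f / fc) ** 2)

def bandpass(f, f0=0.2, sigma=0.6):          # mammal-like early filtering
    return np.exp(-np.log(f / f0) ** 2 / (2 * sigma ** 2))

def opponent_reichardt(a, b, dt=1):
    """Multiply each input by a delayed copy of its neighbour, then subtract
    the mirror-image half-detector (a nonlinearity followed by a difference)."""
    return np.mean(a * np.roll(b, dt) - b * np.roll(a, dt))

def trial_outputs(H, f_sig=0.2, f_noise=0.02, C_sig=0.3, C_noise=0.5,
                  v=1.0, dx=1.0, T=400, n_trials=500):
    """Detector output across trials: a drifting signal grating plus dynamic
    noise at a much lower spatial frequency, both scaled by the front end H."""
    t = np.arange(T)
    outs = np.empty(n_trials)
    for i in range(n_trials):
        phi = rng.uniform(0, 2 * np.pi, T)   # new noise phase every frame
        def lum(x):                          # front-end-filtered luminance at x
            sig = C_sig * H(f_sig) * np.sin(2 * np.pi * f_sig * (x - v * t))
            noi = C_noise * H(f_noise) * np.sin(2 * np.pi * f_noise * x + phi)
            return sig + noi
        outs[i] = opponent_reichardt(lum(0.0), lum(dx))
    return outs

for name, H in [("lowpass (insect-like)", lowpass),
                ("bandpass (mammal-like)", bandpass)]:
    o = trial_outputs(H)
    print(f"{name}: mean {o.mean():+.4f}, sd {o.std():.4f}, "
          f"|mean|/sd {abs(o.mean()) / o.std():6.1f}")
```

In this toy version the low-frequency noise is "invisible" for the same reason as in the paper: a coherently drifting grating produces an opponent output that scales with sin(2*pi*f*dx), so it vanishes at low spatial frequencies even when the lowpass front end transmits them, yet the transmitted noise still degrades the signal-to-noise ratio of the lowpass detector.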
Visual Perception: Neural Networks for Stereopsis by Read JCA, Cumming BG, ReadCumming2017.pdf (0.4 MiB) - This is a comment article on Welchman and Goncalves (2017): “What not” detectors help the brain see in depth. Curr. Biol. 27, 1403–1412.
Neurons in Striate Cortex Signal Disparity in Half-Matched Random-Dot Stereograms by Henriksen S, Read JCA, Cumming BG, HenriksenReadCumming2016.PDF (0.8 MiB) - Human stereopsis can operate in dense “cyclopean” images containing no monocular objects. This is believed to depend on the computation of binocular correlation by neurons in primary visual cortex (V1). The observation that humans perceive depth in half-matched random-dot stereograms, although these stimuli have no net correlation, has led to the proposition that human depth perception in these stimuli depends on a distinct “matching” computation possibly performed in extrastriate cortex. However, recording from disparity-selective neurons in V1 of fixating monkeys, we found that they are in fact able to signal disparity in half-matched stimuli. We present a simple model that explains these results. This reinstates the view that disparity-selective neurons in V1 provide the initial substrate for perception in dense cyclopean stimuli, and strongly suggests that separate correlation and matching computations are not necessary to explain existing data on mixed correlation stereograms.
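For readers unfamiliar with half-matched stimuli, here is a minimal sketch (with a made-up dot count) of why they carry no net interocular correlation: half the dots keep the same contrast polarity in the two eyes and half have it inverted.

```python
# Minimal illustration; the dot count and 50% match level are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_dots = 10_000
left = rng.choice([-1.0, 1.0], n_dots)   # black/white dot contrasts, left eye
flip = rng.random(n_dots) < 0.5          # invert polarity for half the dots
right = np.where(flip, -left, left)      # right eye's dots at the matching offset

# Pearson correlation across corresponding dots: ~0 for a half-matched
# stimulus, +1 if flip never happens, -1 if it always does.
print(f"net interocular correlation: {np.corrcoef(left, right)[0, 1]:+.3f}")
```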
Visual Perception: A Novel Difference Channel in Binocular Vision by Henriksen S, Read JCA, HenriksenRead2016.pdf (0.9 MiB) - A "Dispatch", i.e. a comment article, about May & Zhaoping 2016 "Efficient Coding Theory Predicts a Tilt Aftereffect from Viewing Untilted Patterns". Our summary: "A recent study provides compelling evidence that binocular vision uses two separate channels; one channel adds the images from the two eyes, and the other subtracts them."
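A short numerical sketch (with arbitrary Gaussian-noise images, not the paper's stimuli) shows how the two channels pull binocular input apart: the summation channel carries the correlated part of the two eyes' images and the difference channel the anticorrelated part.

```python
# Toy demonstration of sum and difference channels; stimuli are assumptions.
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal(10_000)             # left-eye input
for name, R in [("correlated", L), ("anticorrelated", -L)]:
    sum_energy = np.mean((L + R) ** 2)      # B+ channel: adds the eyes
    diff_energy = np.mean((L - R) ** 2)     # B- channel: subtracts them
    print(f"{name:>14}: sum channel {sum_energy:5.2f}, "
          f"diff channel {diff_energy:5.2f}")
```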
A single mechanism can account for human perception of depth in mixed correlation random dot stereograms by Henriksen S, Cumming BG, Read JCA, HenriksenCummingRead2016.PDF (0.7 MiB) - Relating neural activity to perception is one of the most challenging tasks in neuroscience. Stereopsis—the ability of many animals to see in stereoscopic 3D—is a particularly tractable problem because the computational and geometric challenges faced by the brain are very well understood. In essence, the brain has to work out which elements in the left eye’s image correspond to which in the right image. This process is believed to begin in primary visual cortex (V1). It has long been believed that neurons in V1 achieve this by computing the correlation between small patches of each eye’s image. However, recent psychophysical experiments have reported depth perception in stimuli for which this correlation is zero, suggesting that another mechanism might be responsible for matching the left and right images in this case. In this article, we show how a simple modification to model neurons that compute correlation can account for depth perception in these stimuli. Our model cells mimic the response properties of real cells in the primate brain, and importantly, we show that a perceptual decision model that uses these cells as its basic elements can capture the performance of human observers on a series of visual tasks. That is, our computer model of a brain area, based on experimental data about real neurons and using only a single type of depth computation, successfully explains and predicts human depth judgments in novel stimuli. This reconciles the properties of human depth perception with the properties of neurons in V1, bringing us closer to understanding how neuronal activity causes perception.
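As a concrete reference point, below is a sketch of the standard binocular energy unit that the correlation account starts from. It is not the modified model from the paper (the paper's specific modification is not reproduced here), and the Gabor parameters are illustrative.

```python
# Standard binocular energy unit; filter parameters are illustrative only,
# and this is NOT the paper's modified model.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(-20.0, 21.0)                  # 1-D retinal positions

def gabor(phase, sigma=5.0, f=0.1):
    """A 1-D Gabor receptive field."""
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * x + phase)

def energy_response(left, right):
    """Sum each eye's filtered input, then square; a quadrature pair makes
    the response insensitive to stimulus phase. The squaring is what makes
    the unit sensitive to interocular correlation."""
    return sum((gabor(ph) @ left + gabor(ph) @ right) ** 2
               for ph in (0.0, np.pi / 2))

img = rng.standard_normal(x.size)           # one random-dot row
print("correlated pair:    ", round(energy_response(img, img), 2))
print("anticorrelated pair:", round(energy_response(img, -img), 2))  # exactly 0 here
```

A half-matched stimulus lands between these two extremes, which is exactly the regime where the paper's single-mechanism account does its work.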
Quantal analysis reveals a functional correlation between pre- and postsynaptic efficacy from excitatory connections in rat neocortex by Hardingham N, Read JCA, Trevelyan A, Nelson C, Jack JJB, Bannister N, HardinghamEA10.pdf (1.7 MiB)
Latitude and longitude vertical disparities by Read JCA, Phillipson GP, Glennerster A, ReadPhillipsonGlennerster09.pdf (5.2 MiB) - At around this time I'd been spending a lot of time thinking about vertical disparity, and had been awarded an MRC grant to study it. To begin with, I wasn't even entirely clear what vertical disparity was, and I had difficulty following some of the other papers on it. I realised that a lot of the confusion was occurring because there are actually several different definitions of "vertical disparity" in the literature -- I've identified at least four -- and to make matters worse, different papers aren't always clear about exactly which definition they have in mind. Unsurprisingly, this has caused a lot of confusion about what the properties of vertical disparity actually are. Part of the problem, I think, is that under some circumstances you obtain the same results regardless of whether you define the elevation coordinate as a latitude or a longitude on the retina, and this may have given the impression that it doesn't ever matter -- whereas in fact, under some circumstances, the two definitions give completely different results. So with my PhD student Graeme Phillipson and my old friend and colleague from back in Oxford, Andrew Glennerster, we decided to write a paper really getting into the nitty-gritty of vertical disparity, and laying out clearly what properties follow from different definitions. It may not be the most exciting paper ever, and like many of my papers, it has masses of Appendices filled with equations. But we hoped it would be a useful reference for anyone interested in vertical disparity -- and I did at least try hard to make the pictures pretty.
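To see how much the definition matters, here is a small numerical example under simplifying assumptions that the paper relaxes (eyes in primary position, no eye rotation, invented viewing geometry): with elevation defined as a Helmholtz-style longitude, the vertical disparity of this target is exactly zero, while the latitude definition gives a clearly nonzero value for the same target.

```python
# Assumes eyes in primary position with no rotation; all distances invented.
import numpy as np

ipd = 6.5                                 # interocular distance in cm (assumed)
P = np.array([20.0, 15.0, 50.0])          # target: cm right of, above, ahead of head

def elevations(eye_x):
    x, y, z = P - np.array([eye_x, 0.0, 0.0])             # direction from one eye
    longitude = np.degrees(np.arctan2(y, z))              # elevation as a longitude
    latitude = np.degrees(np.arctan2(y, np.hypot(x, z)))  # elevation as a latitude
    return longitude, latitude

lonL, latL = elevations(-ipd / 2)
lonR, latR = elevations(+ipd / 2)
print(f"longitude vertical disparity: {lonL - lonR:+.3f} deg")  # zero for this geometry
print(f"latitude vertical disparity:  {latL - latR:+.3f} deg")  # nonzero off the midline
```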
Extracellular calcium regulates postsynaptic efficacy through group 1 metabotropic glutamate receptors. by Hardingham NR, Bannister NJ, Read JCA, Fox KD, Hardingham GE, Jack JJB, HardinghamEA06.pdf (0.7 MiB) - This project began when I was in my first neuroscience post-doc, doing a Wellcome Training Fellowship in Mathematical Biology with Julian Jack in Oxford. Julian's lab had done a lot of work on synaptic physiology, in particular developing quantal analysis as a tool to examine central synapses. The physiology underlying quantal analysis is the fact that neurons are generally connected by more than one terminal. When the presynaptic neuron fires an action potential, packets - quanta - of neurotransmitter may be released from all, some or none of these terminals. If each packet of neurotransmitter contributes a similar amount to the postsynaptic depolarisation, then a histogram of the effect produced by each presynaptic action potential will have several peaks, corresponding to the release of 0, 1, 2 ... quanta of neurotransmitter. In principle, this histogram can then be analysed to estimate the effect caused by each quantum, and the probability that a quantum will be released from a terminal given an action potential. In practice, this depends critically on things like whether each quantum really does have a very similar postsynaptic effect, whether the release probability is the same at all terminals, whether these quantities are constant over time and so on. Julian's lab had already developed a lot of sophisticated tools for quantal analysis, and I took this further, developing a still more elaborate fitting algorithm to extract the quantal parameters, and also a battery of statistical tests to decide whether the resulting model of the synapse was adequate. There's quite a lot of scepticism as to how far quantal analysis can be trusted in the central nervous system (as opposed to at the neuromuscular junction, where it was originally developed), so these tests were critical in convincing people that our results were reliable. Neil Hardingham, the first author, who was a Ph.D. student and post-doc in Julian's lab when I was there, used these techniques to examine how the quantal parameters change as a function of extracellular calcium. He was able to show that calcium depletion, as well as reducing release probability, also reduces quantal size. Since calcium levels drop as neurons become active, this represents a novel mechanism for regulating information transfer between neurons.
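The logic of quantal analysis is easy to see in simulation. Here is a minimal sketch with invented parameters (four release sites, release probability 0.5, quantal size 0.2 mV): the amplitude histogram develops peaks at integer multiples of the quantal size, which is the structure a fitting algorithm exploits to recover the quantal parameters.

```python
# Parameter values are invented for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_sites, p, q = 4, 0.5, 0.2        # terminals, release probability, quantal size (mV)
noise_sd, n_trials = 0.03, 5000    # recording noise (mV) and number of spikes

quanta = rng.binomial(n_sites, p, n_trials)              # quanta released per spike
epsp = quanta * q + rng.normal(0.0, noise_sd, n_trials)  # measured responses

# Crude text histogram: peaks appear near 0, q, 2q, ..., n_sites*q.
counts, edges = np.histogram(epsp, bins=50)
for c, lo in zip(counts, edges):
    print(f"{lo:+.2f} mV | {'#' * (c // 20)}")
```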
All Pulfrich-like illusions can be explained without joint encoding of motion and disparity. by Read JCA, Cumming BG, ReadCumming05c.pdf (2.8 MiB) - The final step was to build a neuronal model, and show that it experienced the illusion. We modelled a neuronal population constructed of neurons which encoded either motion or depth (but not both), and showed that a very simple way of "reading out" this activity, so as to convert it to a perception of depth, would be subject to the Pulfrich illusion. We also examined other evidence which had been put forward in support of the joint motion/depth idea, such as the illusion of swirling motion which occurs in dynamic noise with an interocular delay. We found that this, too, could be experienced by a brain which encoded motion and depth entirely separately. So, while there certainly are primate neurons which jointly encode motion and depth (notably in MT), there is no reason to suppose that these play a privileged role in supporting the Pulfrich effect and related illusions.
This series of three papers (Read & Cumming 2005abc) has recently attracted some criticism from Ning Qian and Ralph Freeman, in a paper entitled "Pulfrich phenomena are coded effectively by a joint motion-disparity process" (J Vis, 9(5): 1-16). My take on it is that we are all basically in agreement, but the situation is obscured by the lack of a clear agreed definition of "joint" vs "separate" encoding of motion and disparity. For example, we said that to be called a motion detector, a cell not only had to be tuned to speed, it also had to respond differently to opposite directions of motion, whereas Qian & Freeman required only speed tuning. I want to clear up one other point. Qian & Freeman say that our model is "non-causal", apparently because it responds to the disparity between a stimulus in one eye and a stimulus which arrives in the other eye at a later time. At the time that stimulus 1 occurs, stimulus 2 is still in the future. However, at the time the neuron responds to the disparity between the two stimuli, both stimuli have already occurred. Thus, the model is firmly causal. Indeed, our derivation of its properties explicitly sets the temporal kernel to zero for future times.
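The causality point is easy to check numerically. Below is a toy sketch (an assumed alpha-function kernel, not the kernel from our papers) in which the temporal kernel is zero for future times; a binocular unit that multiplies the two monocular responses still signals the pairing of a stimulus in one eye with a later stimulus in the other, because it only responds once both stimuli are in the past.

```python
# Toy causality check; the alpha-function kernel is an assumption, not the
# kernel used in the papers.
import numpy as np

def causal_kernel(tau, t_peak=10.0):
    """Zero for tau < 0 (no access to the future), peaking t_peak ms after input."""
    return np.where(tau >= 0, (tau / t_peak) * np.exp(1.0 - tau / t_peak), 0.0)

t = np.arange(0.0, 100.0)                   # time in ms
left = np.zeros_like(t); left[20] = 1.0     # left-eye stimulus at t = 20 ms
right = np.zeros_like(t); right[30] = 1.0   # right-eye stimulus 10 ms later

vL = np.convolve(left, causal_kernel(t))[:t.size]   # causal monocular responses
vR = np.convolve(right, causal_kernel(t))[:t.size]
binocular = vL * vR                         # multiplicative binocular stage

first = t[np.argmax(binocular > 1e-9)]
print(f"first binocular response at t = {first:.0f} ms (after both stimuli)")
```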