Invited Speakers

Five routes to better models of cognition

Wolf Vanpaemel, KU Leuven

William K. Estes Early Career Award winner 2013.

An important goal in cognitive science is to build strong and precise formal models of how people acquire, represent, and process information. I argue that there are several invaluable but underused ways in which models of cognition can be improved. I present a number of worked examples to show how models of cognition can be enhanced by: relying on (prior) predictions rather than on post(erior pre)dictions; reducing dependence on free parameters by capturing theory in the prior; fighting the Greek letter syndrome by testing selective influence; engaging in model expansion; and taking the plausibility of data into account when testing models. Adopting these modeling practices will require modelers to be creative and to overcome their hypochondriacal fear of subjectivity, but will lead to an increased understanding of cognition.
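
To make the first of these routes concrete, here is a minimal Python sketch of a prior predictive check, under assumptions of my own: a toy binomial model of two-choice accuracy with an illustrative Beta prior, neither taken from the talk. The point is only that the prior predictive distribution is the model's genuine prediction, made before any data are fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: accuracy theta ~ Beta(6, 2), data y ~ Binomial(n_trials, theta).
# Beta(6, 2) is an illustrative prior encoding "above-chance accuracy";
# it is not a prior proposed in the talk.
a, b, n_trials, n_sims = 6, 2, 100, 10_000

theta = rng.beta(a, b, size=n_sims)      # draw parameters from the prior
y_pred = rng.binomial(n_trials, theta)   # simulate data before seeing any

# If observed accuracy falls far into the tails of this distribution, the
# theory as encoded in the prior is in trouble -- no posterior fitting involved.
print("prior predictive mean accuracy:", (y_pred / n_trials).mean())
print("95% prior predictive interval:", np.percentile(y_pred / n_trials, [2.5, 97.5]))
```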

Autocorrelation and the functional role of the visual cortex

Horace Barlow, University of Cambridge, UK

Searching the textbooks, journals and monographs on early vision is disappointing if one hopes to form a global picture of what the system has to do to enable us to see. One finds masses of detailed anatomical, neurophysiological, photochemical, psychophysical and optical facts about the visual system, but something is missing, for the computational goal is not obvious. The approach I advocate here is to start by trying to understand what prevents us from seeing: ask what types of noise limit what we see, and what their origins are. Some will say that understanding what hinders vision cannot possibly tell us much about what is being hindered, but I disagree: noise is a random perturbation of some kind, and if it has a measurable effect on performance, it presumably does so by impairing the perception of regular perturbations of that same kind. Therefore the first task of the visual system must be to detect and classify the types of regularity that occur frequently in the images the eye receives.
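
As a toy illustration of this noise-first approach, the following Python sketch (with invented numbers, not measurements from the talk) simulates an ideal observer whose detection of a dim flash is limited purely by Poisson photon noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative mean photon counts for blank and flash intervals.
background, flash, trials = 20.0, 8.0, 100_000

noise_counts = rng.poisson(background, trials)           # no flash presented
signal_counts = rng.poisson(background + flash, trials)  # flash presented

# Ideal observer: report "flash" whenever the count exceeds a fixed criterion.
criterion = background + flash / 2
hit_rate = (signal_counts > criterion).mean()
false_alarm_rate = (noise_counts > criterion).mean()

# Even a perfect detector errs, because the photon count itself is random:
# the noise, not the neural machinery, sets this limit on what can be seen.
print(f"hits {hit_rate:.2f}, false alarms {false_alarm_rate:.2f}")
```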

The need to perform this task becomes obvious once one appreciates the enormous redundancy of natural images. At least 100 different JPEG images can be stored in the disc space required to hold the raw pixel luminances of a single one of them, the space needed if those luminances were random, independent values spanning the whole range over which the camera operates. That assumption is radically false, and it vitiates any statistical conclusions drawn from image data unless allowance is made for the images' highly non-random statistical structure. The task of detecting and classifying the types of regularity that occur frequently in natural images is therefore crucial if we are to make reliable decisions based on them.
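
The redundancy claim is easy to demonstrate numerically. The Python sketch below (my construction, not the talk's) compresses a spatially correlated, natural-like texture and an image of truly independent pixel luminances, with zlib standing in for JPEG, and then measures the neighbour-pixel autocorrelation the compression is exploiting.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Natural-like image: spatially correlated luminances (a smooth random-walk
# texture), standing in for a photograph.
walk = np.cumsum(np.cumsum(rng.normal(size=(n, n)), axis=0), axis=1)
natural = np.interp(walk, (walk.min(), walk.max()), (0, 255)).astype(np.uint8)

# Counterfactual image: independent luminances drawn uniformly over the
# camera's whole range -- exactly the assumption the argument rejects.
independent = rng.integers(0, 256, size=(n, n), dtype=np.uint8)

for name, img in (("correlated", natural), ("independent", independent)):
    raw = img.tobytes()
    packed = zlib.compress(raw, 9)
    print(f"{name:11s}: {len(raw)} bytes raw -> {len(packed)} compressed "
          f"({len(raw) / len(packed):.1f}x)")

# Neighbouring pixels of the correlated image predict one another almost
# perfectly: the autocorrelation that early vision is claimed to exploit.
x = natural.astype(float)
r = np.corrcoef(x[:, :-1].ravel(), x[:, 1:].ravel())[0, 1]
print(f"neighbour correlation: {r:.3f}")
```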

In this talk I hope to explain how this insight leads to a plausible global picture of early vision into which Hubel and Wiesel's 50-year-old discoveries fit like a hand in a glove. It is perhaps even more shocking to discover that William James's statement that the "...sense of sameness is the very keel and backbone of our thinking" anticipated the current claim about the overwhelming importance of autocorrelation by 125 years!

Reinforcement Learning and Psychology: A Personal Story

Richard S. Sutton, University of Alberta

The modern field of reinforcement learning (RL) has a long, intertwined relationship with psychology. Almost all the powerful ideas of RL came originally from psychology, and today they are recognized as having significantly increased our ability to solve difficult engineering problems such as playing backgammon, flying helicopters, and optimal placement of internet advertisements. Psychology should celebrate this and take credit for it! RL has also begun to give something back to the study of natural minds, as RL algorithms are providing insights into classical conditioning, the neuroscience of brain reward systems, and the role of mental replay in thought. I have been working in the field of RL for much of this journey, back and forth between nature and engineering, and have played a role in some of the key steps. In this talk I tell the story as it seemed to happen from my point of view, summarizing it in four things that I think every psychologist should know about RL: 1) that it is a formalization of learning by trial and error, with engineering uses, 2) that it is a formalization of the propagation of reward predictions which closely matches behavioral and neuroscience data, 3) that it is a formalization of thought as learning from replayed experience that again matches data from natural systems, and 4) that there is a beautiful confluence of psychology, neuroscience, and computational theory on common ideas and elegant algorithms.
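
As a concrete companion to points 1 and 2, here is a minimal TD(0) sketch in Python: value learning on a short chain of states, with the temporal-difference error playing the role of the reward-prediction signal. The chain, learning rate, and reward schedule are illustrative choices of mine, not details from the talk.

```python
import numpy as np

# TD(0) value learning on a short chain: a cue at state 0, reward of 1
# delivered at the final state. All numbers are illustrative.
n_states, alpha, gamma, episodes = 5, 0.1, 1.0, 200
V = np.zeros(n_states)

for _ in range(episodes):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0      # terminal reward only
        v_next = V[s + 1] if s < n_states - 1 else 0.0
        td_error = r + gamma * v_next - V[s]       # the reward-prediction error
        V[s] += alpha * td_error                   # learn from the error

# With training, the prediction error migrates from the time of reward back
# to the earliest predictive state, echoing conditioning and dopamine data.
print(np.round(V, 2))  # values approach 1.0 from the back of the chain forward
```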