3D Memory Priors Reflect Communicative Efficiency, Not Statistical Frequency

Abstract

An essential function of the human visual system is to encode sensory percepts of complex three-dimensional objects into memory. Because perceptual resources are limited, the visual system forms internal representations by combining sensory information with strong perceptual priors, optimizing a trade-off between accuracy and efficiency at retrieval. We reveal detailed priors in memory for rotations of common everyday objects using data from 1,150 respondents on Amazon Mechanical Turk (AMT) engaged in a serial reproduction task, in which the response of one participant becomes the stimulus for the next. Successive reconstructions of 3D views of common objects exhibit systematic errors that converge to stable estimates of the perceptual landmarks that bias memory. By sampling uniformly and densely over all rotations in SO(3), we reveal perceptual landmarks in memory that eluded past experimental approaches. The data challenge explanations based on statistical learning (the frequency hypothesis). Instead, we show that the memory data reflect the entropy of word-based semantic descriptors of the view images, and we propose that memory priors reflect communicative need rather than natural image statistics. Finally, optimizing the Information Bottleneck (IB) trade-off between the complexity and accuracy of object-view reconstructions, using a communication model in which views are represented as distributions over a semantic space determined entirely by word-based associations, produces biases that correlate with the biases observed in memory.
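
To make the paradigm concrete, below is a minimal sketch in Python (our own toy model, not the paper's code) of a serial reproduction chain on a one-dimensional circular simplification of SO(3). The landmark locations, noise levels, and the Bayesian observer are illustrative assumptions; the point is only that chains of noisy reconstructions drift toward the peaks of the prior, which is how the paradigm exposes memory priors.

    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)  # orientation grid

    def vonmises(x, mu, kappa):
        # Unnormalized von Mises density on the circle.
        return np.exp(kappa * np.cos(x - mu))

    # Hypothetical landmark prior: mixture of von Mises bumps at cardinal views
    # (0, 90, 180, 270 degrees); the real landmarks are what the experiment estimates.
    landmarks = np.deg2rad([0, 90, 180, 270])
    prior = sum(vonmises(grid, mu, kappa=8.0) for mu in landmarks)
    prior /= prior.sum()

    def reproduce(stimulus, noise_kappa=4.0):
        """One participant: noisy percept -> posterior over the grid -> circular mean."""
        percept = rng.vonmises(stimulus, noise_kappa)
        likelihood = vonmises(grid, percept, noise_kappa)
        posterior = likelihood * prior
        posterior /= posterior.sum()
        # Report the circular mean of the posterior as the reconstruction.
        return np.angle(np.sum(posterior * np.exp(1j * grid)))

    theta = np.deg2rad(37.0)  # arbitrary seed view
    for step in range(30):    # each iteration is the next participant in the chain
        theta = reproduce(theta)
    print(f"chain converged near {np.rad2deg(theta) % 360:.1f} deg")  # near a landmark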
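
For reference, the standard IB objective (Tishby, Pereira & Bialek, 1999) trades off the complexity of a compressed representation against its accuracy about a relevant variable. In the notation below (our labels, not necessarily the paper's), X is the object view, \hat{X} its compressed memory representation, Y the word-based semantic descriptors, and \beta the trade-off parameter:

    \min_{q(\hat{x} \mid x)} \; I(X; \hat{X}) \;-\; \beta \, I(\hat{X}; Y)

Larger \beta favors accuracy and more detailed reconstructions; smaller \beta favors compression, and compression toward high-probability semantic descriptions is the proposed source of the observed memory biases.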

Publication
(In preparation)