Is it a bird? Is it a plane? No! It's a key discovery about how human memory is related to motion
[iframe style="border:none" src="//html5-player.libsyn.com/embed/episode/id/5216909/height/100/width/480/thumbnail/no/render-playlist/no/theme/standard-mini/tdest_id/448900" height="100" width="480" scrolling="no" allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen]
In this episode, I talk with Mark Schurgin, Graduate Fellow in the Visual Thinking Lab at Johns Hopkins University in Baltimore, USA. We talk about Mark's work combining his experience and knowledge of vision research and memory, investigating how the basic knowledge we have about how the world works - our 'core knowledge' - supports our memory for objects. We also talk about how Mark discovered this, and the implications for areas such as machine learning for autonomous self-driving vehicles, devices such as Alexa or Siri, and facial recognition software.
You can try an example of the experiment here: https://www.youtube.com/watch?v=yev94H_Nabg
Here is the link to the abstract for the paper we talk about in this week's show:
Here is the actual abstract for some context:
Humans recognize thousands of objects, with relative tolerance to variable retinal inputs. How this ability is acquired is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory processes that support object recognition - specifically, the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as their source. In several experiments we exposed participants to images of objects and then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise or shown at varying orientations, and we included two manipulations of motion kinematics. Across all experiments, memory performance was better for images that had previously been encountered with kinematics implying a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics implying a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints (often characterized as 'Core Knowledge') are known to support perception and cognition broadly, even in young infants, but they have not previously been considered as a mechanism for memory with respect to recognition.
If you do enjoy this episode, and would like to support the show, you can do that in a few ways:
You can follow the show on twitter @wcwtp, and find the website at www.whocareswhatsthepoint.com
You can also email the show at email@example.com
Please leave a review and rating on iTunes - that really helps others find the show.
Or on Stitcher too: http://www.stitcher.com/podcast/who-cares-whats-the-point?refid=stpr