On Thursday, June 13th, Ed Baggs (blog, twitter, scholar) will deliver a talk that should be of broad interest to cognitive science students, psychologists, and beyond. Ed provides a thought-provoking perspective on ecological psychology, based on an analysis of the relation of an organism to its surround. The talk will take place at 3 p.m. in B2.24, the boardroom on the second floor of the Computer Science Building.
Title: The limits of explanation in embodied cognitive science
Embodied cognitive science seems to offer a radically different type of explanation from that found in computational cognitive science. It promises to explain particular behaviours in terms of perceptual or mechanical coupling between an actor and some structure in the actor’s surroundings. Perceiving and acting are held to be a matter of ‘maximizing or minimizing something visual [or perceptual], or centering or symmetricalizing something, not merely a matter of reacting to a stimulus’ (Gibson 1972, unpublished note quoted in Reed 1988, p. 77). The resulting empirical programme has proved successful and productive. But the methodology appears only to be suitable for the study of behaviours where there is a clearly definable goal that can be described in terms of an optimal solution. Balancing on a beam or swinging for a home run may be describable in such terms; constructing a sentence or a termite mound, maybe less so. For this reason, it is sometimes argued that embodied explanations are inherently limited and cannot “scale up”. I propose that this scaling-up metaphor is misleading. It implies that a single type of explanation should exhaustively capture all types of behaviour. In fact, the optimization mode of explanation does not even capture what is perhaps the most fundamental form of behaviour, namely exploration. I further suggest that “distinctively human” forms of behaviour arise through a dialectical tension between optimization and exploration, and such behaviours require a qualitatively different type of explanation.
On Wednesday, March 20th, 2019, at 4 p.m., J. Scott Jordan of Illinois State University will present a talk of interest to a broad audience. All are welcome. The talk will take place in B1.08, on the first floor of the Computer Science building.
Title: Wild Narratives: The Science of Consciousness and the Stories We Live in
Abstract: The models we have of what we are and what we live in necessarily contextualize and constrain our models of science and consciousness. At one time, certain groups of humans conceptualized themselves as eternal spirits living in a transient material world. In contemporary models, we often describe ourselves as informational minds living in a physical world, or as physical minds situated in a physical world. Whatever the model, the present talk will propose that all such models constitute wild narratives. They are narratives because they are necessarily representations of (i.e., they are about) what we are and what we live in, and they are wild because they emerge ontogenetically, socially, culturally, and phylogenetically out of lived life. I propose such narratives have their roots in unconscious anticipations that allow us to distinguish ourselves from the world, including the actions, perceptions, and cognitions of others. Research indicates these anticipations emerge from a cortico-cerebellar architecture that results in all cortical activity being inherently anticipatory because it is continuously, recursively primed by memory-laden cortico-cerebellar networks (Koziol & Lutz, 2013; Schmahmann, 2001). As a result, the past is continually fed forward into the present as anticipation about the future in action, perception, and cognition, simultaneously. In short, we necessarily live within multiple levels of wild narrative at once. The talk will conclude with a review of a number of contemporary cultural and artistic narratives that address these issues directly. These include W. G. Sebald’s The Rings of Saturn, Hayao Miyazaki’s Mononoke-Hime, and the HBO series Deadwood.
On Monday, December 3rd, at 16:00, Dr Marek McGann of Mary Immaculate College, Limerick, will present a talk of broad relevance to the cognitive science and psychology cohort. All are welcome. The talk will take place in B1.08, on the first floor of the Computer Science building.
Title: The Scope of Psychology: Addressing the relationship between psychological phenomena of different temporal and physical scales.
Abstract: Mind-relevant phenomena occur at a number of different temporal and physical scales – from the perception of events of milliseconds’ duration, to hours-long rambling conversations, to the prolonged collaborative endeavours of a community. Despite long decades of research and inter-disciplinary communication, however, we have few resources for systematically addressing the various relationships between these different phenomena and the range of scales involved. In this talk, I will argue that the failure to examine these considerations in some detail is likely responsible for problematic oversights, and incoherences, within and between various disciplines of cognitive science, using my own field of psychology as an example – particularly the somewhat fraught relationship that psychological science has with its use of averages to describe and explain individuals. Using the Skilled Intentionality Framework developed by Erik Rietveld and colleagues, which integrates aspects of enactive and ecological approaches to cognitive science, I will outline one way in which we might conceptualise the relationships in question, and explore the structure of psychological or cognitive phenomena at multiple scales of activity.
On Tuesday, November 13th, Professor Linda Smith of Indiana University will give a talk on the topic of infant learning and development. The talk will be at 13:00 in Room B1.09 of the Computer Science Building. Linda runs the Cognitive Development Lab in Bloomington, Indiana.
Title: Learning from the infant’s point of view
How do infants learn their first words in a noisy environment? How do they progress from being slow incremental learners to rapid learners who appropriately generalize categories and concepts from minimal experience? How might the answers to these questions make for smarter, more nuanced machine learning? We have used head cameras to collect egocentric views (and parent talk) in the home from the perspective of infants and toddlers (1-month-olds to 30-month-olds, with no experimenters present; 1,000 hours of head-camera video) and in a naturalistic toy-room environment in the laboratory (about 200 hours of head-mounted eye tracking, yielding both the egocentric view and the gaze within that view). Our analyses indicate four principles we believe to be key to human prowess in visual category recognition: (1) learn a massive amount about very few individual entities (and a little bit about lots of other individual things); (2) learn a massive amount about very few categories (and a little bit about lots of other categories); (3) learn about small selective sets at different points in time; (4) self-generate the data for learning (with some help from mom and dad). The implications for both human cognition and machine learning will be discussed.