Introduction

Harold Hay Research Grant

In 2013, the proposal titled Vision Science for Dynamic Architecture, by Sergei Gepshtein, Alex McDowell, and Greg Lynn, won the inaugural award from the Harold Hay Grant Program, established by the Academy of Neuroscience for Architecture (ANFA) in 2012:

The mission of the Academy is “to promote and advance knowledge that links neuroscience research to a growing understanding of human responses to the built environment.”

Accordingly, the Harold Hay Research Grant Program aims to encourage cross-disciplinary research in neuroscience and architecture, “informing building design” and “incorporating principles derived from neuroscience research.”

Vision Science for Dynamic Architecture

Quoting the proposal and subsequent documents:

The relationship between the person and the built environment is dynamic. This dynamism unfolds over many spatial and temporal scales. Consider the varying viewing distances and angles of observation, as well as built environments that contain moving parts and moving pictures.

The architect wants to predict human responses across the full range of these possibilities: a daunting task. We study how this challenge can be reduced using the systematic understanding of perception developed in sensory neuroscience.

Our starting point is the basic fact that human vision is selective. It is exceedingly sensitive to some forms of spatial and temporal information but blind to others. A comprehensive map of this selectivity has been worked out in tightly controlled laboratory studies of visual perception, where the subject responds to stimuli on a flat screen at a fixed viewing distance. We translate this map from these restricted laboratory conditions to the scale of large built environments.
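To make this translation concrete, the minimal sketch below converts between the retinal units used in laboratory measurements (cycles per degree of visual angle) and the physical size of a pattern viewed at architectural distances. It relies only on elementary viewing geometry; the function name and the sample distances are illustrative, not part of the measurement platform described next.

```python
import math

def cycles_per_degree_to_period_m(f_cpd: float, viewing_distance_m: float) -> float:
    """Physical period (in meters) of a grating whose spatial frequency is
    f_cpd cycles per degree of visual angle at the given viewing distance."""
    # One degree of visual angle spans about 2 * d * tan(0.5 deg) meters
    # on a frontoparallel surface at distance d.
    meters_per_degree = 2.0 * viewing_distance_m * math.tan(math.radians(0.5))
    cycles_per_meter = f_cpd / meters_per_degree
    return 1.0 / cycles_per_meter

# A 4 cycles/degree grating (near the peak of human contrast sensitivity)
# at a laboratory screen distance versus room- and facade-scale distances.
for d in (0.57, 3.0, 10.0, 30.0):
    period = cycles_per_degree_to_period_m(4.0, d)
    print(f"viewing distance {d:5.2f} m -> one cycle spans {period * 100:6.2f} cm")
```

At a typical laboratory distance of 57 cm, one degree of visual angle covers about 1 cm of the screen; at 30 m the same degree covers roughly half a meter, so the same retinal pattern corresponds to very different physical structures in a building.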

Using a pair of industrial robots carrying a projector and a large screen, we create the conditions for probing the limits of visual perception at the scale relevant to architectural design. The large dynamic images propelled through space allow us to trace the boundaries of the solid regions in which different kinds of visual information can or cannot be accessed.

For this initial study, we concentrate on several paradoxical cases, such as the diminished ability to pick up visual information as its source approaches the observer, and the abrupt change in visibility that follows only a slight change in viewing distance.
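These cases can be illustrated with a toy computation. Assuming a classic closed-form approximation of the human contrast sensitivity function (Mannos and Sakrison, 1974), rather than the authors' own measurements, a pattern of fixed physical size drifts across the sensitive band of retinal spatial frequencies as the observer approaches it, so its predicted visibility can rise and then fall over a modest change in distance. The grating period and distances below are arbitrary choices for illustration.

```python
import math

def retinal_frequency_cpd(period_m: float, viewing_distance_m: float) -> float:
    """Spatial frequency (cycles/degree) subtended by a grating of fixed
    physical period when viewed from the given distance."""
    meters_per_degree = 2.0 * viewing_distance_m * math.tan(math.radians(0.5))
    return meters_per_degree / period_m

def csf_mannos_sakrison(f_cpd: float) -> float:
    """Classic closed-form approximation of the photopic contrast
    sensitivity function (relative units)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * math.exp(-((0.114 * f_cpd) ** 1.1))

# A grating with a fixed 5 cm period: as the observer walks toward it, its
# retinal frequency drops, first toward the peak of the sensitive band and
# then below it, so predicted visibility rises and then falls.
for d in (40.0, 20.0, 10.0, 5.0, 2.0, 1.0):
    f = retinal_frequency_cpd(0.05, d)
    print(f"distance {d:5.1f} m -> {f:5.2f} c/deg, "
          f"relative sensitivity {csf_mannos_sakrison(f):.2f}")
```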

Our overarching goal is to create a versatile measurement platform for mapping the spatial and temporal boundaries of perception in large spaces, for the forthcoming case studies in architectural and urban design, and for experiments in virtual architecture, mixed reality, and other immersive media.

Against the backdrop of a long and venerable history of “rationalization of sight,” from the early drawing systems to the invention of perspective and moving pictures, our study makes a case for the transition from the study of visual representations of space to the study of the experience of visual space that contains such representations.