Imagine you are working on a computer, cutting a steak, painting a picture, or washing dishes. One thing shared by all of these scenarios is that they take place while you are facing a close-scale space. These spaces typically extend 2-4 feet from the body and consist of spatial layouts filled with objects. Current theories of human visual processing focus on how we perceive objects (3-D spatially bounded entities) and scenes (large-scale indoor or outdoor environments), but relatively little work has applied these theories to explain how we process the small-scale spaces in which we perform most of our everyday tasks. My work aims to fill this gap.
At the moment, I call these kinds of spaces reachspaces. Here are some questions I am currently pursuing.
What visual features characterize reachspaces?
Things that belong to the same category tend to look alike. For scenes, members of a category (such as "forest" or "field") tend to share global features, such as openness, mean depth, and navigability, with other members of the category. What are the features that are characteristic of reachspaces? We are exploring this question with both behavioral and computational methods.
What are the neural correlates of reachspace perception?
Since reachspaces have a spatial layout component but are also filled with objects, we are exploring the extent to which regions that typically process objects versus scenes are recruited when we view a reachspace. We are also exploring whether there are regions of visual cortex that respond more strongly to reachspaces than to objects and scenes.