

Research Overview


My research interests lie in investigating the building blocks, or features, that underlie perception, and in exploring how multiple features are combined to determine higher-level cognitive decisions in tasks of perceptual classification, recognition memory, judgment, and identification.  Formal quantitative models are employed to inform and interpret the empirical research.

Some current projects are outlined below:



Information Integration


The goal of this research is to uncover the fundamental mechanisms by which a judgment is fashioned from multiple sources of information.  For example, how does a physician formulate a diagnosis from numerous test results?  Of particular theoretical and historical importance is whether information is combined in a way that maximizes some performance measure, such as the probability of a correct diagnosis.  Such questions of optimality permeate the judgment and decision making, categorization, memory, and perception literatures, among others.  Typically, optimal and suboptimal outcomes occur in quite different tasks.  For example, poor combination of information is common in verbal probability problems and good combination is often found in category judgments of colors varying in hue and saturation.  This research attempts to bridge these gaps by utilizing a common set of paradigms.  In particular, this research explores how the nature of the system performing the combination affects the manner in which the information is combined.  For example, is cognitively combined information generally more suboptimal than perceptually combined information?  This research addresses two specific issues.
  1. Many domains of research have investigated the degree to which information is combined optimally.  How might such questions be addressed within a common empirical and theoretical framework?
  2. What task and stimulus factors lead to improved information integration?
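The statistical benchmark behind these questions can be made concrete: when two independent Gaussian sources estimate the same quantity, the inverse-variance-weighted average is the optimal linear combination.  A minimal numpy sketch, with all values hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# True quantity and two noisy "sources" (e.g., two test results)
truth = 1.0
sigma_a, sigma_b = 1.0, 2.0          # source A is more reliable than B
n = 10_000
a = truth + rng.normal(0.0, sigma_a, n)
b = truth + rng.normal(0.0, sigma_b, n)

# Reliability (inverse-variance) weighting is the optimal linear combination
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_b**2)
combined = w_a * a + (1 - w_a) * b

# The optimal combination is less variable than either source alone
print(np.var(a), np.var(b), np.var(combined))
```

A suboptimal combiner, by contrast, might weight the two sources equally or rely on one alone; comparing human weights to the optimal ones is one way of asking the questions above.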


Collaborators:  Jerome Busemeyer, Richard Shiffrin



Feature Induction


This project utilizes a number of new experimental and mathematical techniques to tackle one of the most fundamental questions of psychology: what are the basic building blocks, the features, of perception?  That is, what are the parts of an object that are treated as unitary wholes when recognizing or discriminating objects?  The empirical work is based on the response-classification technique.  On each of numerous trials, an observer's task is to identify a stimulus presented in a noise field.  On many trials, the noise will cause observers to make judgment errors.  To determine which pixels cause the observer to make a particular response, the noise fields (without the actual stimulus!) from trials on which the observer gives a particular response are summed and subtracted from the result of a similar sum for the other choices.  The resulting map is called a classification image and shows which pixel locations the observer used to make a classification judgment.  The classification image, however, is a map of all of the pixels used to make a response and does not discriminate between the subsets of pixels that define a feature.  The next step is to define a generative model of object recognition, i.e., a mathematical formalization of how pixels are grouped into features and then of how the features are used to make a response.  The problem then becomes one of recovering the generative model, including the features, from the classification image data.  On the surface, this induction step is an enormously expensive computational problem.  Recent advances in data-manipulation techniques, however, may make the computation feasible.
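The classification-image computation described above can be sketched directly.  The simulated observer below is a hypothetical template matcher, not data from the project; the point is only that subtracting the mean noise field for one response from the mean for the other recovers the pixels driving the response:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observer: responds "A" when the noise field correlates
# positively with an internal template (a vertical bar), "B" otherwise.
size = 8
template = np.zeros((size, size))
template[:, size // 2] = 1.0

n_trials = 20_000
noise = rng.normal(0.0, 1.0, (n_trials, size, size))
resp_a = (noise * template).sum(axis=(1, 2)) > 0.0

# Classification image: mean noise on "A" trials minus mean on "B" trials
ci = noise[resp_a].mean(axis=0) - noise[~resp_a].mean(axis=0)

# The template column stands out; pixels the observer ignored average to ~0
print(ci[:, size // 2].mean(), ci[:, 0].mean())
```

The feature-induction problem starts where this sketch ends: the image shows which pixels mattered, but not how they group into features.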


Collaborators:  Michael Ross, David Ross, Jason Gold, Richard Shiffrin



Model Selection


A model of a psychological phenomenon is a mathematical formalization of a theory that can produce precise experimental predictions.  The main goal of this research is to determine whether averaged-group or individual-subject model analyses better uncover the “correct” model for an experiment.  Because it is well known that averaging can distort the form of data, there has been a trend away from modeling averaged data and toward modeling individual data.  However, there are conditions under which fitting averaged data may recover the correct model more often than fitting individual data.  For example, if there is little data per subject, the noise associated with fitting individual subjects may be worse than the distortion created by averaging across subjects.  Using a simulation technique in which the correct model is known, we are exploring how these analyses interact with various experimental factors and modeling choices, such as the number of subjects, the number of trials per condition, the model selection criterion, parametric variation across subjects, and the choice of competing models.
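The distortion produced by averaging can be illustrated with a classic hypothetical case: subjects who each learn in a single all-or-none step, but at different points in practice.  Every individual curve is a step function, yet the group average rises gradually, so a model fit only to the average would wrongly favor gradual learning:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical all-or-none learners: accuracy jumps from 0 to 1 on a
# randomly chosen trial that differs across subjects.
n_subjects, n_trials = 50, 40
switch = rng.integers(5, 35, n_subjects)      # trial at which each subject learns
trials = np.arange(n_trials)
accuracy = (trials[None, :] >= switch[:, None]).astype(float)

avg = accuracy.mean(axis=0)

# Each individual curve takes only the values 0 and 1 ...
print(np.unique(accuracy))                    # [0. 1.]
# ... but the group average passes through many intermediate levels
print(((avg > 0.1) & (avg < 0.9)).sum())
```

The simulation studies in this project run the same logic in reverse: generate data from a known model, then check whether averaged or individual fits identify it.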


Collaborators:  Adam Sanborn, Richard Shiffrin



Categorization



Decision Processes


The main goal of this project is to investigate the extent to which judgment and decision-making phenomena manifest themselves in categorization and recognition memory tasks.  In particular, this project explores the role of choice-set similarity in categorization and recognition performance.  For example, suppose there are two fairly dissimilar widgets, A and B, on the market.  A company is considering producing a new kind of widget, C, that is similar to A and dissimilar to B.  A common finding is that, under certain constraints, the introduction of C will hurt the market share of A more than that of B.  In psychological terms, people are less likely to choose A than B after the introduction of C.  This change in preferential choice is called the similarity effect.  Different similarity relations between A, B, and C will produce different shifts in preferential choice.  This project has three branches.
  1. The choice set in a categorization and recognition task can be altered in the same manner as in a choice task.  Do the same shifts of choice occur in categorization and recognition?  Preliminary studies suggest that similar shifts do indeed occur.
  2. No current categorization models, and few recognition models, have decision processes sophisticated enough to account for these shifts.  How can these choice shifts be modeled?
  3. This project has potential ramifications for police line-ups, where the addition of a person to the line-up can drastically alter the probability of selecting another line-up member.
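The similarity effect can be illustrated with a toy "crowding" choice rule (a hypothetical illustration, not one of the models under study): each option's share is discounted by its summed similarity to the available alternatives, so a new option steals share mainly from the option it resembles:

```python
import numpy as np

# Toy choice rule: options are points in attribute space with equal base
# value; an option's strength is penalized by its summed similarity to
# everything on offer, so similar options split support.
def shares(points):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    sim = np.exp(-d)                      # similarity decays with distance
    strength = 1.0 / sim.sum(axis=1)      # crowding penalty
    return strength / strength.sum()

A, B, C = [0.0, 0.0], [5.0, 5.0], [0.5, 0.0]   # C is similar to A, not B

before = shares(np.array([A, B]))
after = shares(np.array([A, B, C]))

print(before)        # [0.5 0.5]
print(after)         # A loses far more share to C than B does
```

A context-free rule such as Luce's ratio rule would instead reduce A and B proportionally, which is exactly what the similarity effect violates.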


Collaborators:  Jerome Busemeyer, Ken Malmberg



Representations


The question of the underlying representation of perceptual categories has been a long-standing topic of much debate.  Exemplar theorists suggest that people represent categories by the set of category instances, or exemplars, they have experienced.  Prototype theorists posit that some form of abstraction takes place over these exemplars.  The most common assumption is that people represent categories by some sort of central tendency, such as the mean, of the exemplars.  The main goal of this project is to use a new experimental technique, the classification-response technique, to address the issue of category representation.  On each of numerous trials in a classification-response experiment, the observer's task is to categorize a stimulus presented in a noise field.  On many trials, the noise will cause observers to make errors.  To determine which pixels cause the observer to make a particular category response, the noise fields (without the actual stimulus!) from trials on which the observer gives a particular response are summed and subtracted from the result of a similar sum for the other choices.  The resulting map is called a classification image and shows which pixel locations the observer used to make a classification judgment.  It turns out that, for very simple categories, exemplar and prototype processes produce different classification images.  We hope to exploit this difference in empirical work and to extend it to more complex category structures.
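The divergence between the two accounts can be seen even without images.  A hypothetical one-dimensional sketch, with categories constructed so that a summed-similarity (exemplar) rule and a similarity-to-the-mean (prototype) rule disagree about the same probe:

```python
import numpy as np

# Hypothetical 1-D categories chosen so the two accounts disagree:
cat_x = np.array([0.0, 10.0])   # spread-out exemplars, mean 5.0
cat_y = np.array([4.0, 5.0])    # tight exemplars, mean 4.5

def sim(a, b):
    return np.exp(-np.abs(a - b))       # similarity decays with distance

def exemplar_choice(probe):
    # summed similarity to every stored exemplar
    return "X" if sim(probe, cat_x).sum() > sim(probe, cat_y).sum() else "Y"

def prototype_choice(probe):
    # similarity to the category mean only
    return "X" if sim(probe, cat_x.mean()) > sim(probe, cat_y.mean()) else "Y"

probe = 0.0   # sits on an X exemplar, but closer to Y's mean
print(exemplar_choice(probe), prototype_choice(probe))   # X Y
```

The classification-image version of this contrast works the same way: because the two processes weight pixels differently, they leave different signatures in the subtracted noise maps.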


Collaborators:  Jason Gold, Richard Shiffrin





Dynamics Perception


The purpose of this research is to begin to explore the potential contributions of invariants and exemplars in the perception of dynamic properties as realized in the colliding balls paradigm.  On a typical trial of a colliding balls experiment, two balls roll across a flat surface, collide, and then roll away from each other.  The observer’s task is either to determine which of the two balls is heavier or to make a quantitative estimate of mass ratio.  Recent evidence suggests that observers can become quite adept at determining the relative mass of the two balls and the focus of research has been on what mechanisms observers use to attain this level of performance.  The invariant approach to this task rests on the assumption that people can learn to detect complex visual patterns that reliably specify which ball is heavier. The main tenet of exemplar based models is that people store particular instances of collisions in memory, called exemplars, and that these exemplars are later retrieved to perform the task.  Formal mathematical models of these theories are developed and contrasted in two experiments in which observers are trained in colliding balls tasks.  The evidence so far suggests that people rely on exemplar processing when the task involves relatively few, similar collisions.  Observers switch to invariant processing when there are large numbers of dissimilar collisions.  When confidence in the invariant is low, however, observers revert to the errorful strategies they used before training.
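One candidate invariant comes directly from mechanics: conservation of momentum implies that the mass ratio equals the inverse ratio of the velocity changes, m1/m2 = |Δv2|/|Δv1|.  A sketch with hypothetical values (the elastic-collision formulas are a stand-in for an actual display):

```python
# Conservation of momentum: m1 * dv1 = -m2 * dv2, so m1/m2 = |dv2| / |dv1|.
# Hypothetical 1-D collision, arbitrary units, assumed perfectly elastic.
m1, m2 = 2.0, 1.0
u1, u2 = 3.0, 0.0                 # velocities before the collision

# Standard elastic-collision formulas give the post-collision velocities
v1 = (m1 - m2) / (m1 + m2) * u1 + 2 * m2 / (m1 + m2) * u2
v2 = 2 * m1 / (m1 + m2) * u1 + (m2 - m1) / (m1 + m2) * u2

ratio = abs(v2 - u2) / abs(v1 - u1)
print(ratio)                      # recovers m1/m2 (within floating-point error)
```

An invariant-based observer would, in effect, pick up this velocity-change ratio from the display; an exemplar-based observer would instead match the collision to stored instances.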


Collaborators:  Caren Rotello, Kyle Cave