Research Overview

My research interests lie in investigating the building blocks, or features, that underlie perception and in exploring how multiple features are combined to determine higher-level cognitive decisions in tasks of perceptual classification, recognition memory, judgment, and identification. Formal quantitative models are employed to inform and interpret the empirical research.

Information Integration

The goal of this research is to uncover the fundamental mechanisms by which a judgment is fashioned from multiple sources of information. For example, how does a physician formulate a diagnosis from numerous test results? Of particular theoretical and historical importance is whether information is combined in a way that maximizes some performance measure, such as the probability of a correct diagnosis. Such questions of optimality permeate the judgment and decision-making, categorization, memory, and perception literatures, among others. Typically, optimal and suboptimal outcomes occur in quite different tasks: poor combination of information is common in verbal probability problems, for example, while near-optimal combination is often found in category judgments of colors varying in hue and saturation. This research attempts to bridge these gaps by using a common set of paradigms. In particular, it explores how the nature of the system performing the combination affects the manner in which the information is combined. For example, is cognitively combined information generally more suboptimal than perceptually combined information? This research addresses two specific issues.

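One classic benchmark for such optimality questions is reliability-weighted averaging of independent noisy sources. The sketch below, with purely illustrative numbers not drawn from any experiment in this program, shows why the inverse-variance weighting that maximizes accuracy beats a simple unweighted average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two noisy "reports" about the same quantity (illustrative values only).
true_value = 10.0
sigma_a, sigma_b = 2.0, 1.0          # noise level of each source
n = 100_000
source_a = true_value + rng.normal(0.0, sigma_a, n)
source_b = true_value + rng.normal(0.0, sigma_b, n)

# Optimal (maximum-likelihood) combination weights each source by its
# inverse variance; a common suboptimal strategy is a plain average.
w_a = sigma_b**2 / (sigma_a**2 + sigma_b**2)
optimal = w_a * source_a + (1.0 - w_a) * source_b
plain = 0.5 * (source_a + source_b)

# The optimal combination is less variable around the true value.
print(optimal.std(), plain.std())
```

Comparing an observer's accuracy against this inverse-variance benchmark is one standard way of asking whether a perceptual or cognitive system combines information optimally.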
Collaborators: Jerome Busemeyer, Richard Shiffrin

Feature Induction

This project utilizes a number of new experimental and mathematical techniques to tackle one of the most fundamental questions of psychology: what are the basic building blocks, the features, of perception? That is, what are the parts of an object that are treated as unitary wholes when recognizing or discriminating objects?

The empirical work is based on the response-classification technique. On each of numerous trials, an observer's task is to identify a stimulus presented in a noise field. On many trials, the noise will cause the observer to make judgment errors. To determine which pixels cause the observer to make a particular response, the noise fields (without the actual stimulus!) from trials on which the observer gives that response are summed and then subtracted from the result of a similar sum for the other choices. The resulting map, called a classification image, shows which pixel locations the observer used to make a classification judgment.

The classification image, however, is a map of all of the pixels used to make a response; it does not discriminate between the subsets of pixels that define a feature. The next step is to define a generative model of object recognition, i.e., a mathematical formalization of how pixels are grouped into features and of how the features are then used to make a response. The problem then becomes one of recovering the generative model, including the features, from the classification-image data. On the surface, this induction step is an enormously computationally expensive problem. Recent advances in techniques of data manipulation, however, may make this computation feasible.

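As a rough illustration of the response-classification logic, the sketch below substitutes a hypothetical template-matching observer for a human subject; the stimuli, noise level, and decision rule are all assumptions made for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-alternative identification task: a template-matching
# "observer" stands in for a human subject, and signal "A" is shown on
# every trial (both are simplifying assumptions for the demo).
size, n_trials = 8, 20_000
signal_a = np.zeros((size, size))
signal_a[:, 3] = 1.0                 # vertical bar
signal_b = np.zeros((size, size))
signal_b[3, :] = 1.0                 # horizontal bar

sum_a = np.zeros((size, size)); n_a = 0
sum_b = np.zeros((size, size)); n_b = 0
for _ in range(n_trials):
    noise = rng.normal(0.0, 1.0, (size, size))
    stimulus = signal_a + noise
    # The observer reports whichever template matches the stimulus better;
    # strong noise occasionally flips the response to "B".
    if (stimulus * signal_a).sum() > (stimulus * signal_b).sum():
        sum_a += noise; n_a += 1
    else:
        sum_b += noise; n_b += 1

# Classification image: mean noise on "A"-response trials minus mean
# noise on "B"-response trials; pixels the observer weighted stand out.
class_image = sum_a / n_a - sum_b / n_b
```

In this toy case the classification image is positive along the vertical bar and negative along the horizontal one, recovering the pixel locations this simulated observer actually used.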
Collaborators: Michael Ross, David Ross, Jason Gold, Richard Shiffrin

Model Selection

A model of a psychological phenomenon is a mathematical formalization of a theory that can produce precise experimental predictions. The main goal of this research is to determine whether analyses of averaged group data or of individual-subject data better uncover the “correct” model for an experiment. Because it is well known that averaging can distort the form of data, there has been a trend away from modeling averaged data and toward modeling individual data. However, there are conditions under which fitting averaged data may recover the correct model more often than fitting individual data. For example, if there is little data per subject, the noise associated with fitting individual subjects may be worse than the distortion created by averaging across subjects. Using a simulation technique in which the correct model is known, we are exploring how these analyses interact with various experimental factors and modeling choices, such as the number of subjects, the number of trials per condition, the model-selection criterion, parametric variation across subjects, and the choice of competing models.

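A minimal version of such a model-recovery simulation might look like the following, with a known exponential "correct" model, a power-law competitor, grid-search fitting, and parametric variation across simulated subjects; every number here is an illustrative assumption, not a value from the actual project:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data come from a known "correct" exponential forgetting curve; an
# exponential and a power model are then fit (by grid search) to each
# individual subject and to the group average, and we count how often
# each analysis selects the correct (exponential) model.
t = np.arange(1, 11, dtype=float)
A, B = np.meshgrid(np.linspace(0.5, 1.5, 21),
                   np.linspace(0.05, 1.5, 30), indexing="ij")
exp_curves = A[..., None] * np.exp(-B[..., None] * t)        # candidate fits
pow_curves = A[..., None] * (t + 1.0) ** (-B[..., None])

def best_sse(curves, y):
    # Smallest sum of squared errors over the whole parameter grid.
    return ((curves - y) ** 2).sum(axis=-1).min()

n_sims, n_subjects = 50, 20
correct_ind = correct_avg = 0
for _ in range(n_sims):
    rates = rng.uniform(0.1, 0.8, n_subjects)    # parametric variation
    data = np.array([np.exp(-r * t) + rng.normal(0, 0.05, t.size)
                     for r in rates])
    # Individual analysis: majority vote across per-subject model fits.
    wins = sum(best_sse(exp_curves, y) < best_sse(pow_curves, y)
               for y in data)
    correct_ind += wins > n_subjects / 2
    # Averaged analysis: a single fit to the group-mean curve.
    avg = data.mean(axis=0)
    correct_avg += best_sse(exp_curves, avg) < best_sse(pow_curves, avg)

print(correct_ind / n_sims, correct_avg / n_sims)
```

Varying the assumed noise level, number of subjects, and spread of the individual rate parameters in a sketch like this is exactly the kind of manipulation the simulation technique described above explores.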
Collaborators: Adam Sanborn, Richard Shiffrin

Categorization

Decision Processes

The main goal of this project is to investigate the extent to which judgment and decision-making phenomena manifest themselves in categorization and recognition memory tasks. In particular, this project explores the role of choice similarity in categorization and recognition performance. For example, suppose there are two fairly dissimilar widgets, A and B, on the market. A company is considering producing a new kind of widget, C, that is similar to A and dissimilar to B. A common finding is that, under certain constraints, the introduction of C will hurt the market share of A more than that of B. In psychological terms, people are less likely to choose A than B after the introduction of C. This change in preferential choice is called the similarity effect. Different similarity relations among A, B, and C produce different shifts in preferential choice. This project has three branches.

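One simple way to see how a similarity effect can arise is a random-utility sketch in which the similarity between A and C is modeled as correlated utility noise. This is an illustrative assumption for the demo, not the specific choice model under development in this project:

```python
import numpy as np

rng = np.random.default_rng(3)

# Options A and B are equally attractive; new option C is similar to A,
# modeled here as utility noise shared between A and C.
n = 200_000
common = rng.normal(0.0, 1.0, n)                # noise shared by A and C
u_a = 1.0 + common + rng.normal(0.0, 0.5, n)
u_c = 1.0 + common + rng.normal(0.0, 0.5, n)
u_b = 1.0 + rng.normal(0.0, np.sqrt(1.25), n)   # same variance, independent

# Two-option market: A and B split the choices evenly.
p_a_before = (u_a > u_b).mean()

# Three-option market: each trial goes to the highest-utility option.
choices = np.argmax(np.stack([u_a, u_b, u_c]), axis=0)
p_a_after = (choices == 0).mean()
p_b_after = (choices == 1).mean()

print(p_a_before, p_a_after, p_b_after)
```

Because A and C tend to be attractive on the same trials, they split a single pool of choices while B keeps its own, so B's share after C enters exceeds A's, mirroring the market-share pattern described above.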
Collaborators: Jerome Busemeyer, Ken Malmberg

Representations

The underlying representation of perceptual categories has been a long-standing topic of much debate. Exemplar theorists suggest that people represent categories by the set of category instances, or exemplars, they have experienced. Prototype theorists posit that some form of abstraction takes place over these exemplars; the most common assumption is that people represent categories by some measure of central tendency, such as the mean, of the exemplars. The main goal of this project is to address the issue of category representation with the response-classification technique described above: on each of numerous trials the observer categorizes a stimulus presented in a noise field, and classification images are computed from the noise fields on trials sorted by response. It turns out that, for very simple categories, exemplar and prototype processes produce different classification images. We hope to exploit this difference in empirical work and for more complex category structures.

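The contrast between the two representational assumptions can be sketched with a toy one-dimensional category judgment; the stimulus values and similarity gradient below are arbitrary choices for illustration:

```python
import numpy as np

# Training exemplars for two hypothetical categories on one dimension.
cat_a = np.array([1.0, 2.0, 6.0])    # note the isolated A exemplar at 6.0
cat_b = np.array([3.5, 4.0, 4.5])
c = 1.0                              # similarity gradient (assumed)

def similarity(x, y):
    # Similarity decays exponentially with distance.
    return np.exp(-c * np.abs(x - y))

def p_a_exemplar(x):
    # Exemplar model: summed similarity to every stored instance.
    sa, sb = similarity(x, cat_a).sum(), similarity(x, cat_b).sum()
    return sa / (sa + sb)

def p_a_prototype(x):
    # Prototype model: similarity to each category's central tendency.
    sa = similarity(x, cat_a.mean())
    sb = similarity(x, cat_b.mean())
    return sa / (sa + sb)

# Near the isolated A exemplar, the exemplar model is pulled toward
# category A, while the prototype (mean of A = 3.0) is not.
print(p_a_exemplar(5.5), p_a_prototype(5.5))
```

Stimuli near such an isolated exemplar are where the two accounts disagree most, which is the same logic by which exemplar and prototype processes produce different classification images.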
Collaborators: Jason Gold, Richard Shiffrin

Dynamics Perception

The purpose of this research is to explore the potential contributions of invariants and exemplars to the perception of dynamic properties, as realized in the colliding-balls paradigm. On a typical trial of a colliding-balls experiment, two balls roll across a flat surface, collide, and then roll away from each other. The observer’s task is either to determine which of the two balls is heavier or to make a quantitative estimate of the mass ratio. Recent evidence suggests that observers can become quite adept at determining the relative mass of the two balls, and the focus of research has been on the mechanisms observers use to attain this level of performance. The invariant approach to this task rests on the assumption that people can learn to detect complex visual patterns that reliably specify which ball is heavier. The main tenet of exemplar-based models is that people store particular instances of collisions in memory, called exemplars, and that these exemplars are later retrieved to perform the task. Formal mathematical models of these theories are developed and contrasted in two experiments in which observers are trained in colliding-balls tasks. The evidence so far suggests that people rely on exemplar processing when the task involves relatively few, similar collisions and switch to invariant processing when there are large numbers of dissimilar collisions. When confidence in the invariant is low, however, observers revert to the errorful strategies they used before training.

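For intuition about what such an invariant could be: conservation of momentum implies that the mass ratio is specified exactly by the ratio of the two balls' velocity changes, so the ball whose velocity changes less is the heavier one. A minimal check with hypothetical numbers:

```python
# Candidate invariant: by conservation of momentum, m1*dv1 = -m2*dv2,
# so m1/m2 = |dv2|/|dv1| regardless of how elastic the collision is.
def mass_ratio(v1_pre, v1_post, v2_pre, v2_post):
    dv1 = v1_post - v1_pre
    dv2 = v2_post - v2_pre
    return abs(dv2) / abs(dv1)       # = m1 / m2

# Hypothetical 1-D elastic collision with m1 = 2, m2 = 1, v1 = 1, v2 = -1.
# The standard elastic-collision formulas give the outgoing velocities:
#   v1' = ((m1 - m2)*v1 + 2*m2*v2) / (m1 + m2) = -1/3
#   v2' = ((m2 - m1)*v2 + 2*m1*v1) / (m1 + m2) =  5/3
print(round(mass_ratio(1.0, -1/3, -1.0, 5/3), 6))   # -> 2.0
```

Whether trained observers actually pick up a pattern equivalent to this velocity-change ratio, or instead rely on stored collision exemplars, is precisely the contrast the experiments above are designed to test.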
Collaborators: N/A
Imagine that two government agencies have separate databases of convicted criminals and their known or suspected associates, as well as possible aliases. Different amounts of information may be available in these distinct lists of criminals, so the two databases may overlap in content but each will include unique information. Additionally, the databases may contain shared information that is not recognized as such, as when a particular individual is known by a different (non-overlapping) alias in the two systems. How can investigators integrate the knowledge in these databases to decide, say, whether two criminals are associates of one another, or whether they are in fact a single individual?

The current research uses psychological experiments to reveal how humans perform tasks like these in order to inform machine-learning endeavors. In particular, this research will explore how people learn to integrate information across relatively complex knowledge structures under different stimulus conditions and task constraints. Subjects will be given descriptions of complex knowledge structures and asked to perform tasks requiring extraction of information and comparison across structures, but they will not be told how to approach the tasks. By tracking and measuring the approaches they spontaneously adopt, we hope to gain new insight into how the methods of human subjects differ from those of current computational models for these tasks.

Collaborators: Caren Rotello, Kyle Cave