By Brandie Jefferson - Washington University in St. Louis
Research shows that people’s recent interactions with their environment can guide where they look.
For example, do you waste time in the morning looking for your keys? You might have more luck finding them quickly by writing the word “KEYS” on a light switch you use every morning, the research indicates.
Our visual world is cluttered, complex, and confusing. “We can’t fully process everything in a scene, so we have to pick and choose the parts of the scene we want to process more fully,” says Richard Abrams, professor of psychological and brain sciences at Washington University in St. Louis. “That is what we call ‘attention.’”
So then, how do we choose which parts of a scene deserve our attention?
“We’re more likely to direct our attention to things that match objects that we’ve interacted with,” Abrams says. The new research shows that this is the case even if they only match in meaning—not appearance, he explains.
Previous research has shown that our attention is biased toward objects that share basic features—such as color—with something we have recently seen.
For example, finding your keys on a red key chain would be easier if you had previously reached for a red apple than if you had chosen a yellow banana for your snack. This effect is called “priming.” Priming also occurs for items that are only conceptually related: the word “KEYS” doesn’t look like your keys, yet the facilitation still occurs.
And if we want to strengthen that bias? Perform an action while being primed with an image or, according to the newest research, with a word, and you’ll find your keys even faster.
“Things we act on are, by definition, ‘important’ because we’ve chosen to make an action,” Abrams says. Making an action may produce a signal in the brain that what you’re seeing is more important than if you had just observed it passively.
In the study, which appears in the journal Psychonomic Bulletin & Review, participants performed a pair of ostensibly unrelated tasks.
Participants sat at a computer, where the screen first flashed either the word “Go” or “No.” If the screen flashed “Go,” the participant was to press a button when the priming word (e.g., “KEYS”) appeared. If they initially saw “No,” they were told just to look at the word.
Next, participants searched a “scene” on the screen that contained two pictures. They were told to find a left or right arrow and indicate which was present (while ignoring up-or-down arrows). The arrows were superimposed on the pictures (but the content of the pictures was unimportant for the task). Importantly, one picture always represented the priming word (a picture of keys). The other picture was of a different, unrelated object.
Although the pictures were unrelated to the arrow-finding task, on some trials the arrow was on the prime-matching picture (keys), whereas on the other trials, the arrow was on the other picture.
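For readers who want a concrete picture of the design, here is a minimal sketch of the trial structure in Python. The class names, probabilities, and trial counts are illustrative assumptions for clarity, not the authors’ actual experiment code.

    import random
    from dataclasses import dataclass

    # Illustrative sketch of the two-task trial structure described above.
    # All names, probabilities, and values are assumptions, not the
    # authors' actual experiment code.

    @dataclass
    class Trial:
        prime_word: str       # word shown in the first task, e.g. "KEYS"
        go_trial: bool        # True: "Go" shown, so the participant pressed a button
        arrow_on_match: bool  # True: search arrow sits on the prime-matching picture
        target_arrow: str     # "left" or "right"; up/down arrows serve as distractors

    def make_trials(n: int, prime_word: str = "KEYS") -> list[Trial]:
        """Randomly mix the two factors of interest: action vs. passive
        priming, and whether the arrow lands on the prime-matching picture."""
        trials = [
            Trial(
                prime_word=prime_word,
                go_trial=random.random() < 0.5,        # half the trials need a button press
                arrow_on_match=random.random() < 0.5,  # arrow on keys picture half the time
                target_arrow=random.choice(["left", "right"]),
            )
            for _ in range(n)
        ]
        random.shuffle(trials)
        return trials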
Participants could locate the left or right arrow faster when it was on the picture of keys than when it was on the unrelated picture, as would be expected based on the established principle of priming.
The important novel finding of this research was that participants located the arrow faster still if the priming had involved an action—that earlier button press.
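To make that comparison concrete, a toy summary of reaction times might look like the sketch below. The numbers are placeholders chosen only to illustrate the predicted ordering; they are not data from the study.

    from statistics import mean

    # Hypothetical reaction-time records: (action_primed, arrow_on_match, rt_ms).
    # The values are placeholders, not measurements from the study.
    records = [
        (True,  True,  512), (True,  True,  498),
        (False, True,  540), (False, True,  551),
        (True,  False, 580), (False, False, 585),
    ]

    def mean_rt(action_primed: bool, arrow_on_match: bool) -> float:
        return mean(rt for a, m, rt in records
                    if a == action_primed and m == arrow_on_match)

    # Priming: the arrow is found faster on the prime-matching picture...
    print("passive prime, arrow on match:", mean_rt(False, True))
    # ...and faster still when the prime was paired with a button press.
    print("action prime,  arrow on match:", mean_rt(True, True))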
In practice, this priming could have a number of applications. For example, Abrams says, imagine priming baggage screeners with the word “KNIFE.” That obviously wouldn’t allow screeners to see something that was invisible, but, he says, “That might draw their attention to a knife in their visual field,” something that may have otherwise gone undetected in the visual clutter of a disorganized suitcase.
“In order for us to behave efficiently in the world, we have to make good choices about which of the many objects in a scene we are going to be processing,” Abrams says. “These experiments reveal one mechanism that helps us make those choices.”
Source: Washington University in St. Louis via Futurity - Original Study DOI: 10.3758/s13423-017-1415-4