Wednesday, February 21, 2007

Perception

When we perceive, we use our previous knowledge to gather and interpret stimuli registered by our senses. Typically, we gather stimuli by seeing (visual) or hearing (auditory). Visual object recognition allows us to identify a complex arrangement of sensory stimuli. Gestalt psychology, a historical approach to the field, offers one early principle: humans tend to organize what they see into patterns rather than random arrangements. Look at the picture below. What do you see?



You probably see a human face rather than simply an oval and two straight lines. This figure has Gestalt, or an overall quality that transcends the individual elements.

More modern theories of visual object recognition include template-matching theory, feature-analysis theory, and recognition-by-components theory.

With template-matching theory, you compare a stimulus with a set of templates, or patterns that you have stored in memory. Consider the variability of letter shapes: we can easily read the words below, even though they are printed in different fonts.



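If it helps to see the idea concretely, here is a minimal sketch of template matching in Python. The 3x3 letter grids are invented purely for illustration; a real system would store far richer templates:

```python
# Toy template matching: compare a stimulus grid against stored templates
# and pick the best match. The letter patterns are illustrative inventions.

def similarity(stimulus, template):
    """Count how many cells of the stimulus agree with the template."""
    return sum(
        s == t
        for s_row, t_row in zip(stimulus, template)
        for s, t in zip(s_row, t_row)
    )

def recognize(stimulus, templates):
    """Return the name of the most similar stored template."""
    return max(templates, key=lambda name: similarity(stimulus, templates[name]))

templates = {
    "L": [(1, 0, 0), (1, 0, 0), (1, 1, 1)],
    "T": [(1, 1, 1), (0, 1, 0), (0, 1, 0)],
}

# A slightly distorted "L" (one extra cell filled in) still wins.
stimulus = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]
print(recognize(stimulus, templates))  # -> L
```

The point of the sketch is that recognition succeeds even when the stimulus is not a perfect copy of any stored template, only the closest one.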
Feature-analysis theory proposes that a visual stimulus is composed of a small number of characteristics, each being a distinctive feature. For example, a table has four legs and a flat surface.



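Here is a comparable toy sketch of feature analysis, with made-up feature lists standing in for the distinctive features a real visual system would extract:

```python
# Toy feature analysis: each object is described by a set of distinctive
# features, and a stimulus is identified by the best-overlapping set.
# The feature names are simplified stand-ins for illustration.

object_features = {
    "table": {"four legs", "flat surface"},
    "chair": {"four legs", "flat surface", "backrest"},
    "stool": {"three legs", "flat surface"},
}

def identify(observed):
    """Pick the object sharing the most features with what we observed."""
    return max(
        object_features,
        key=lambda name: len(observed & object_features[name]),
    )

print(identify({"four legs", "flat surface", "backrest"}))  # -> chair
```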
Recognition-by-components assumes that a given view of an object can be represented as an arrangement of simple 3-D shapes called geons. For example, a mug consists of a straight upright cylinder and another curved cylinder for a handle.



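In the same spirit, here is a rough sketch of recognition-by-components. The geon names and attachment relations are simplified stand-ins for Biederman's theory, but they capture its key point: a mug and a bucket can share the same geons and differ only in where the curved cylinder attaches:

```python
# Toy recognition-by-components: objects are arrangements of geons,
# i.e., simple shapes paired with spatial relations. Names are illustrative.

known_objects = {
    "mug": {("cylinder", "upright"), ("curved cylinder", "attached to side")},
    "bucket": {("cylinder", "upright"), ("curved cylinder", "attached to top")},
}

def recognize(arrangement):
    """Match observed geons-plus-relations against stored descriptions."""
    return max(
        known_objects,
        key=lambda name: len(arrangement & known_objects[name]),
    )

seen = {("cylinder", "upright"), ("curved cylinder", "attached to side")}
print(recognize(seen))  # -> mug
```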
The above types of visual object recognition are all examples of bottom-up processing, where the physical stimuli from the environment are registered on the sensory receptors. Another type of processing is top-down processing, which emphasizes how a person’s concepts and higher-level mental processes help in identifying objects. Top-down processing particularly influences the ability to recognize words during reading. See how easy it is to read the paragraph below, even though only the first and last characters of each word are positioned correctly.

Aoccdrnig to rseearch at Cmabrigde Uinervtisy, it deosn’t mttaer in what oredr the ltteers in a wrod are, the olny iprmoatnt thing is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a total mses and you can sitll raed it wouthit porbelm. This is bcuseae the human mind deos not raed ervey lteter by istlef, but the wrod as a wlohe.

We use context and previous knowledge to make meaning of the jumbled characters between each word’s first and last letters. So you see, bottom-up and top-down processes must work together to allow us to recognize objects such as words in print.
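You can generate text like this yourself. Below is a small Python sketch that keeps each word’s first and last letters in place and shuffles everything in between (punctuation handling is deliberately naive in this toy version):

```python
import random

def scramble_word(word):
    """Keep the first and last letters; shuffle the interior."""
    if len(word) <= 3:
        return word  # nothing meaningful to shuffle
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text):
    return " ".join(scramble_word(w) for w in text.split())

print(scramble_text("Reading depends on context and previous knowledge"))
# e.g. -> "Rdaenig deendps on cnetoxt and puroveis kwdognele"
```

Because only the interior letters move, top-down knowledge of likely words usually fills in the rest.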

Recent research has begun to pinpoint the areas of the brain associated with bottom-up and top-down processing. One such study done at Ohio State University (Sarter, Givens, & Bruno, 2000) measured the brain’s activation during sustained attention (top-down), a person’s ability to detect rare and unpredictable signals over a period of time, versus arousal attention (bottom-up). The article combines research from both animal and human studies in cognitive neuroscience into a detailed description of the neuronal networks underlying sustained attention (essential for successful academic performance). The researchers found evidence that sustained attention depends on the right fronto-parietal-thalamic network, while arousal is controlled by the thalamic, midbrain, and reticular networks. By pinpointing these regions of the brain, researchers pave the way for more effective treatment of attentional disorders. While the article is very scientific, the researchers’ emphasis on studying each sub-process of attention and how those sub-processes interact helps us to understand and appreciate individual learners.

Speech perception is the complement to visual perception. During speech perception, our auditory system translates sound vibrations into a sequence of sounds that we perceive to be speech. Visual cues contribute to speech perception. The researchers McGurk and MacDonald showed participants a video of a person producing simple sounds, such as “gaga”, while presenting different audio information along with the video, such as “baba”. The observers’ responses usually reflected a compromise between these two discrepant sources of information: “dada”. Here is a video clip that illustrates the McGurk effect. Turn up the volume and watch the video. What do you hear? Now play it again with your eyes shut. What do you hear? You can even open and close your eyes while watching the video; the effect remains the same.

Recent research (Vroomen & de Gelder, 2000) underscores the close link between auditory and visual perception. The researchers concluded that it is easier to detect a visual stimulus when it is accompanied by an abrupt tone. This so-called "freezing" phenomenon in the visual modality was closely related to the organization of the sound in the auditory modality. This raises the question: can we use information from one sensory modality to organize our perceptions in the other? Read more about the study by clicking here.

With the “No Child Left Behind” policy, test scores are foremost in educators’ minds. One study at Indiana University looked at the impact of visual and auditory cognition, measured through standard clinical tests, on predicting test scores (Watson, Kidd, Connell, Eddins, Gospel, Watson, Horner, Lowther, Rainey, & Kruger, 2003). The researchers found that the reading-related skills factor was the strongest predictor of reading performance and other areas of academic achievement. The second strongest predictor of reading and math achievement was the visual cognition factor, followed by the verbal cognition factor. The weakest predictor of academic achievement was the speech processing factor.

By directing lessons to different learning modalities, teachers can take advantage of the interplay between visual and auditory perception. The use of the Internet as a teaching tool, however, raises another question: how can visual and verbal information presented via the WWW increase or restrict understanding? The Department of Education did a study in 1996, when the WWW was first making inroads into the classroom (El-Tigi, 1997). The purpose of the study was to examine students’ perceptions of the effectiveness of visuals in conveying the instructional message. The conclusion was that an educational web site's visual design greatly affects how people understand and use the information. Therefore, a web site should be designed using a learner-centered approach that takes into account the audience's age, learning preferences, and culture, among other factors. Click here to read more about the DOE study.

2 comments:

Ed Psy Topics said...

Please see my comments to the posting above this one.
Your work is good; however, you must polish the style. Please always give references in the text.

See here in your text: "Recent research (reference here) has begun to pinpoint the areas of the brain associated with bottom-up and top-down processing. One such study done at Ohio State University (reference here)"

Remember to use scholarly articles.

Ed Psy Topics said...

Your work is good. Please give references correctly in the text posted. Also, link the disparate little texts to give a flow to your posts.

Just a little more work on it. You are on the right path :-)