Thursday, April 19, 2007


So, now that my Div3 is completed, and I just submitted my official penultimate draft, I thought I'd discuss some of the finer points of what I've learned in this journey.

1. Though a link stands to reason, my particular study did not find one between WM and N400 amplitude. There was a trend in adults (p = .063), and the p-value got lower with cleaner subjects. There was NO correlation in kids at this sample size. I think the lack of statistical power comes from the small sample; with data as varied as mine, a bigger sample may be all it would take to reach significance.
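To make the power point concrete, here's a back-of-the-envelope sketch using the standard Fisher z approximation for correlation tests. The r values are placeholders, not my actual numbers:

```python
import math

def n_for_correlation(r, z_alpha=1.96, z_beta=0.84):
    """Approximate N needed to detect a correlation r at two-tailed
    alpha = .05 (z = 1.96) with 80% power (z = 0.84), via Fisher's z."""
    z_r = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z transform of r
    return math.ceil(((z_alpha + z_beta) / z_r) ** 2 + 3)

# Even a medium correlation needs a decent-sized sample:
print(n_for_correlation(0.3))  # 85 subjects for r = .3
print(n_for_correlation(0.5))  # 29 subjects for r = .5
```

So detecting even a medium-sized WM/N400 correlation takes far more subjects than a small ERP study usually runs, which is consistent with seeing only a trend.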

2. Adult data in my study ended up, as expected, being sensitive to the gradient of cloze expectancies. That is, moderately incongruous sentences elicited shallower N4's than strongly incongruous ones. Children did not show this effect. Here's my reasoning (right from my discussion section) as to why:

Chall’s (1983) final stage in reading development is being able to grasp metaphor and ambiguous language, something achieved only after years of exposure to different forms, meanings, and uses of words. Additionally, Vygotskii (1965) found that children have more difficulty separating an object from its defining traits: to a child, a cow must have horns to be a cow (even if a dog is temporarily relabeled as a “cow” in the context of a game).

In the current study, the following sentence was rated as moderately incongruous: “She made a jar out of pumpkins.” An adult can more efficiently pass through the immediate nodes that would make the sentence false (a pumpkin is large, rigid, a plant) and reach the nodes that make the sentence possible (it’s hollow, it can be small). Accuracy data suggest that children and adults end up perceiving the sentences as equally plausible. The adults’ greater efficiency, however, shows up as significantly longer RTs for children in the moderate condition. Working memory, reading ability, and efficiency are likely key to developing this abstraction (Gathercole, 1993); further studies will be needed to better understand the role of these processes.

I think studies like this would help us better understand neuropsychological evidence about kids' perception of abstraction. At this stage, I would definitely pursue a study linking complex span (Daneman and Carpenter, 1980) to N400 amplitude. I think with the right ERP task (homograph decoding, Proactive Interference/Release from Proactive Interference), it's totally possible. I would also be interested in whether PI/RPI are related to complex span; this would give insight into executive function and working memory. There are probably such studies out there, I just haven't had the chance to look for them.

Further, I would be interested to see other forms of N4 tasks involving kids/adolescents. Language is the key to human consciousness, so why aren't we looking at our own intellectual roots? I mean, I'm sure we are, but I'd like to know more.

I don't know if I'll be updating this, but I have no particular reason to take it down. When I get back to my library carrel, I'll see if there are any other studies or books of note that should be posted.

But if I don't post again to this particular blog, I will link to whatever project I have next.

Until then, I leave you with the two quotes (which work rather nicely with each other) found on the front cover of my Div3. They perfectly sum up the importance of studying psycholinguistics:

“Thought and language, which reflect reality in a way different from that of perception, are the key to the nature of human consciousness. Words play a central part not only in the development of thought but in the historical growth of consciousness as a whole. A word is a microcosm of human consciousness.” – L.S. Vygotskii (1965)

"I just look at the words and I know what they say.”-Laura, age 6, from Siegel (1993)

Wednesday, February 28, 2007

be patient when you read, you'll catch all our flaws...

Humans all over are an impatient species. We jump ahead with everything.

In my research on semantic priming, wherein the reader expects a certain ending to a sentence, I found a few articles indicating that this happens at the word level, too.

For example:
Semantic priming (high cloze probability sentence):
Johnny ate a peanut butter and jelly ________. I'm sure nearly 100% of you would fill in "sandwich."

Word priming:

[The stimulus image from the original post hasn't survived; it showed a string like "WOR" followed by a partially obscured final letter.]
Now, what goes in to complete that word? The blank in an experiment discussed in Rumelhart and McClelland (1981) had an obscured letter at the end, with a horizontal line and two slightly crooked appendages: most people perceived this as a "k," and the second-most popular choice was a "d." The clincher? The ending was actually an obscured "R." The word was a nonsense word, and the task was to fill in the blank letter, not to pronounce the word, so subjects were never primed to think it was an actual word. Go back and look at it. I'm sure your mind, for a split, split second, thinks "work" or "word."

According to Rumelhart and McClelland (1981), this word is out of context. But what of forced context? If I wrote the letter "B" with some blanks, your mind immediately jumps to all the possible words it knows that start with "B": if "I" were the next letter, "BI__", you might start going to BIND, BIKE, BILL, etc. But what if I wrote "BINT"? The mind is forced to read BINT and then search around in its working memory for what that word might mean. That is, it reads the string, and if no real meaning is found, it moves on. The activation for B is limited, but the words that have "_INT" aren't (HINT, LINT, DINT). That is, even though the first letter is usually more dominant, the brain gets stuck on INT.
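The forced-context idea can be sketched as a toy activation model. The lexicon and matching rule below are my own simplification for illustration, not Rumelhart and McClelland's actual interactive-activation model:

```python
# Toy letter-level activation: every lexicon word consistent with the
# visible letters of a frame ("_" = unknown position) stays active.
# The lexicon is invented for illustration.
LEXICON = ["HINT", "LINT", "DINT", "MINT", "BIND", "BIKE", "BILL", "WORD", "WORK"]

def activated(frame):
    """Return the lexicon words still active for this frame."""
    return [w for w in LEXICON
            if len(w) == len(frame)
            and all(f in ("_", c) for f, c in zip(frame, w))]

print(activated("BI__"))  # the "B" words: ['BIND', 'BIKE', 'BILL']
print(activated("_INT"))  # the INT neighborhood: ['HINT', 'LINT', 'DINT', 'MINT']
print(activated("BINT"))  # a nonword: nothing fully matches -> []
```

The point of the sketch: "BINT" activates nothing as a whole word, so the only support comes from its pieces, which is the "stuck on INT" intuition.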

So, if this happens at the word level, it must happen at the sentence level.
"We raised pigs and cows on the family ____." "Farm" is what you'd think. But the N400 is triggered more strongly if I end it with "dirt," and most strongly with "apple."
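For reference, cloze probability is just the proportion of norming subjects who complete a frame with a given word. A quick sketch (the response counts here are invented, not real norming data):

```python
from collections import Counter

def cloze_probability(completions, target):
    """Fraction of norming responses that match the target word."""
    counts = Counter(w.lower() for w in completions)
    return counts[target.lower()] / len(completions)

# Invented norming data for "We raised pigs and cows on the family ____"
responses = ["farm"] * 18 + ["ranch"] * 2
print(cloze_probability(responses, "farm"))   # 0.9 -> high cloze
print(cloze_probability(responses, "apple"))  # 0.0 -> strongly incongruous
```

An ending nobody produces in norming ("apple") has a cloze probability of zero, and that's the condition that should give the deepest N400.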

So I leave you with this quote:
"'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe."

– Lewis Carroll
"...dealing with nonsense is what reading [is] fundamentally all about." – Robert Glushko

Rumelhart, DE., McClelland, JL., (1981) Interactive processing through spreading activation. In Lesgold, AM., Perfetti, CA., (Eds.) Interactive Processes in Reading. Lawrence Erlbaum Associates, Hillsdale.

Thursday, February 8, 2007

Reading Development

You'd think that with something as universal as reading, there'd be tons of research into the stages of reading development. At some point, we all go from the ABC song to "To be, or not to be: that is the question: Whether 'tis nobler in the mind to suffer The slings and arrows of outrageous fortune, Or to take arms against a sea of troubles..."

There are only a few books out there that have parsed out stages of reading development. An older one, Chall (1983), is an amalgamation of research that divides reading into six stages. Chall argues that reading is Piagetian in nature, in that you can't learn words if you don't know letters. These stages have been backed up by research covered in previous entries in this blog. For instance, the author notes that most people attain an adult-level sense of grammar by the age of 7, much like Friederici et al. (2004) found in their developmental N4/P6 study. Further, by the age of 10 (the age range desired for my study), children make the transition from struggling to read to being able to read for learning. That is, by the age of 10, you make the transition from "learning to read" to "reading to learn."

But this is only one account, and I've found few people who cite Chall. Still, having now immersed myself in the field, I can say that much of what she argues makes sense.

So why is it that there are no universally agreed-upon stages of development in reading? I've read several books that discuss various stages, including linking them to Piagetian stages; as with Piaget, you need one stage to get to the next. You need to know the sounds "ah" and "p" and the rule of silent "e" to make "apple." I perhaps should have taken some linguistics courses before making linguistics my Div3. But I'm more interested in how people acquire these concepts, rather than in the concepts themselves. There's always grad school. (Citation pending.)

But that's all beside the point. There's a lot of material out there regarding reading development. It's either focused on going from the graphemic representations of letters to phonological sounds, or on the transition into comprehension of entire chunks of speech or prose. In sum, people are wayyyyy too narrowly focused.

Which is why it's exciting to read a summation of reading development as succinct as de Jong (2006). I picked up the book this essay is in (Pickering, 2006) not realizing what a breath of fresh air it would be. De Jong, in one paragraph, sums up reading development in a way that has taken other authors I've been reading an entire book to articulate. I'd quote the entire paragraph, but I think there are copyrights involved. In sum:
There are a few stages to reading. The beginning reader needs to acquire certain abilities: to know that there is a systematic correspondence between the written and the spoken forms of words, and to know that there are boundaries between words in written form that there aren't in spoken form (all words blend together when we talk, unless we're adding emphasis).

Why is this so exciting? Well, its succinctness is definitely part of it. Another reason is its grand implication for accent and other variation within language. In Dutch, the /a/ in ball, hand, and cat is pronounced the same. We, as Americans, would sound pretentious if we pronounced these the same.

This becomes known as "decoding" or "phonological recoding," in which a person maps the orthography (the written form of a word) onto the phonology (the sound of the word).
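A cartoon of the recoding step, just to fix the idea. The letter-to-phoneme table is a made-up toy covering a few regular letters; real decoding is nowhere near this clean:

```python
# Toy grapheme-to-phoneme table for a few regular letters (IPA-ish).
G2P = {"b": "b", "c": "k", "a": "æ", "t": "t", "p": "p", "i": "ɪ", "n": "n", "h": "h"}

def recode(word):
    """Phonologically recode a (toy-regular, lowercase) written word."""
    return "".join(G2P[letter] for letter in word)

print(recode("cat"))  # kæt
print(recode("pin"))  # pɪn
```

The beginning reader is, in effect, building and automatizing this mapping, plus all the exceptions the toy table leaves out (silent e, digraphs, and so on).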

More to follow...

Wednesday, January 31, 2007

All this is related

Your brain does a ton of stuff every second of every day. It suppresses or inhibits information it doesn't need, it processes information, it holds onto enough data to fill a library.

When you read, which is my focus, it takes in a sentence and processes it somehow. There are several theories about what happens. As aforementioned, your brain, it seems, goes through its semantics and its syntax. It makes sure that it's all okay, and then moves on to the next one.

The pathway sentences mentioned before hone in on the meaning you're going for. There are several studies (and I'm just getting to these myself) that track eye movements with ambiguous sentences and ambiguous subjects. The most notable of these ambiguities are homographs (words that are spelled and pronounced the same but have several meanings, such as "bat," "bow," "hearing," "boxer," and "race"). There are three main theories about how we process these ambiguities, specifically homographs.

I'm pressed for time at this very moment, but I'm really excited to have begun reading these. The dominant theory is that we hold on to all of these meanings until we are given data to suppress, or inhibit, all but one of them. Some homographs have dominant and subordinate meanings. Bow: bow and arrow, bow on a present, or bow of a ship? Bat: baseball or animal? Hearing: they can be hard with lawyers, or they can be hard in general. And so on.

What potentially triggers the N400 is the brain going back and checking which of these homograph meanings should work. It has been shown that people with better working memory come to these conclusions faster. Why? Because they remember more, need to go back less, and therefore have smaller N4's, because their brains need to do less work.
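Here's a toy sketch of that "hold every meaning until context disambiguates" account. Everything in it (the words, the sense labels, the context sets) is made up for illustration, not taken from any of the studies above:

```python
# Each homograph starts with all senses active; context words provide
# evidence that suppresses the senses they don't support.
SENSES = {
    "bat": {"baseball": {"ball", "swing", "pitcher"},
            "animal":   {"cave", "wings", "night"}},
    "bow": {"weapon":   {"arrow", "archer"},
            "ribbon":   {"present", "gift"},
            "ship":     {"deck", "stern"}},
}

def resolve(homograph, context_words):
    """Return the senses still active after seeing the context words."""
    active = set(SENSES[homograph])
    for word in context_words:
        supported = {s for s in active if word in SENSES[homograph][s]}
        if supported:            # evidence found: suppress the rest
            active = supported
    return active

print(resolve("bat", []))         # no context yet: both senses held
print(resolve("bat", ["cave"]))   # {'animal'}
print(resolve("bow", ["arrow"]))  # {'weapon'}
```

The working-memory connection is that holding several active senses at once is exactly the kind of load a higher-span reader can carry more comfortably.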

Therefore, it is my hypothesis that people with better working memory will have larger N4's in my study, because they will have been holding onto all possible meanings of a sentence, even ambiguous ones, and when they get to the end and it is or isn't what they thought could happen, it will affect them more.

BAM! That's where my Div3's direction is going now. Bet you never saw all that coming.

Now that I've described to you, my captive audience, what my Div3 is and its background, I can probably get more specific for you; tell you about other specific studies, or how mine is going. We'll see as time progresses and my commitments take care of themselves.

Kumar, N., Debruille, JB., (2004) Semantics and N400: insights for schizophrenia. Journal of Psychiatry and Neuroscience 29(2), 89-98.

Fiebach, CJ., Vos, SH., Friederici, AD., (2004) Neural Correlates of Syntactic Ambiguity in Sentence Comprehension for Low and High Span Readers. Journal of Cognitive Neuroscience 16(9), 1562-1575.

Gunter, TC., Jackson, JL., Mulder, G., (1995) Language, memory and aging: an electrophysiological exploration of the N400 during reading of memory-demanding sentences. Psychophysiology 32, 215-229.

Monday, January 22, 2007

Does a canary breathe?

I like to start out with questions. The question above might have taken you a second to figure out. From the '70s to the mid-'80s, it was very popular to study language associations. So many of the journals I cite ended their runs in '86-'89, and most of them contain foundational articles and amazing linguistic theory. Chomsky, Daneman and Carpenter, Meyer and Schvaneveldt are all buried in dusty microfilm, in books on the second floor of the library where only a select few of us go, and in works-cited page after works-cited page.

The '80s brought on the advent of working memory studies; most of what I read that was foundational for working memory was done then. I guess popular topics come and go.

The latest thing in cognitive neuroscience is aggression, attention, and gendered behavior (or the lack thereof). Look at the launch dates of journals, and their end dates, and you can see when society wanted to better itself and how.

70's= what we say. 80's=where those words come from. 90's=why can't our kids sit still? Are we destined to have the NAS as featured in Johnny Mnemonic?

But I digress. Does a canary breathe? A question like that usually takes longer to answer than "Does a canary sing?" Collins and Quillian (1969, 1972, 1975) developed a theory of what is known as "semantic memory." There are maps in your head. D. Mook (2004) describes semantic memory thus: "in contrast with short-term or working-memory, the amount of information that is stored in semantic memory is enormous. It includes the facts that Rome is the capital of Italy, that Madrid is the capital of Spain, that Columbus crossed the ocean in 1492, and that zebras do not wear overcoats."

Mook describes the theory much better than Collins and Quillian ever did (brilliant researchers, crappy writers). He describes semantic memory as a massive library, with each word cross-referenced in several to millions of different ways. Wickens (1970) clarifies that nouns are the most likely to be cross-referenced this well, and even then, some verbs are adjectives, some verbs are nouns, some nouns are adjectives (you can be chicken, you can eat a chicken; you go for a run, have a run in your stockings, or run from the police; you can jam your toast, your traffic, or your music, or a "jam" can just be a song...etc.).

But, on the whole, it still stands true that most words exist in a phenomenal store. Back to the library motif: "In a library, entries will be classified in a hierarchical system or network, such that books about (say) animals will be in a place reserved for them. Then books about birds will likely be found within that section, and so on. There may be a number of different routes, in other words, by which we can get to the information that we need, but the point is that from a given starting place, we only have to search among a limited number of entries that are connected to that starting place, rather than plodding through the entire library" (Mook, 2004, p. 210).

In 1969, Collins and Quillian tested the semantic hierarchy. They even included a nice little graphic of the hierarchy [image not preserved here]: animal at the top, bird beneath it, canary at the bottom, each level with its own properties.

Note: a canary can sing and is yellow, but to find out if it breathes, your cognitive network has to go up through birds to animals. If I asked "can a canary fly?" you would have to go up only one level.

The experiment was as follows: the researchers sat college students in front of a computer (sound familiar?) and presented them with different questions, some involving only one step in the above hierarchy (Is a canary a bird?), some involving more. The more steps the verification required, the longer the response took.

The results? It took longer, by about half a second, to answer "does a canary breathe?" than "does a canary sing?"
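The hierarchy idea is easy to sketch in code. This toy network is mine, not Collins and Quillian's actual stimuli; the point is just that verification cost grows with the number of levels you have to climb:

```python
# Toy Collins & Quillian network: each concept stores its own properties
# and a pointer to its parent concept.
NETWORK = {
    "animal": {"parent": None,     "properties": {"breathes", "eats"}},
    "bird":   {"parent": "animal", "properties": {"flies", "has wings"}},
    "canary": {"parent": "bird",   "properties": {"sings", "is yellow"}},
}

def levels_to_verify(concept, prop):
    """Climb the hierarchy until the property is found; return how many
    levels up we had to go (None if the property is never found)."""
    steps, node = 0, concept
    while node is not None:
        if prop in NETWORK[node]["properties"]:
            return steps
        node = NETWORK[node]["parent"]
        steps += 1
    return None

print(levels_to_verify("canary", "sings"))     # 0 steps: stored locally
print(levels_to_verify("canary", "flies"))     # 1 step: up to bird
print(levels_to_verify("canary", "breathes"))  # 2 steps: up to animal
```

Map steps onto time and you get the Collins and Quillian RT pattern: "sings" is fastest, "breathes" slowest.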

*hold onto the seat of your pants, you're about to fall over*

In my experiment, I have college students in front of a computer screen deciding if a sentence makes conceptual sense. A third of these sentences are rated as not making sense, a third as making sense, and a third could go either way. I'm looking at the N400, a reaction to semantics and semantic memory. In theory, reaction time (RT) should be longer for type 2 (could go either way) than for the other two.

Each sentence leads you down a garden path (Stanovich, 1979) of high cloze probability. Take "The frog caught a fly with its ____": tongue would be one answer. But if I said "A frog caught a fly with its vacuum," the N4 would be triggered. We already discussed this in prior entries.

Since the brain takes longer, and reacts more strongly, to a sentence that makes no sense, it could be because your brain is following these pathways and not getting what it expected (Stanovich, 1979; Neville, Coffey, Holcomb, 1992). Therefore, it could be argued that there is neurological evidence for this sort of pathway.

A good follow-up experiment would be to simply replicate these questions and see if the N4 is triggered. In doing this research, I've thought of a thousand more questions to be answered. But until then, I ask you, is a stagecoach a vehicle?

Mook, D (2004) Classic Experiments in Psychology. Greenwood Press, Westport, CT.

Collins, AM., Loftus, EF., (1975) A Spreading Activation Theory of Semantic Processing. Psychological Review 82(6), 407-428.

Collins, AM., Quillian, MR., (1972) Experiments on semantic memory and language comprehension. In Gregg, LW., (Ed.) Cognition in Learning and Memory. New York: Wiley.

Collins, AM., Quillian, MR., (1969) Retrieval Time from Semantic Memory. Journal of Verbal Learning and Verbal Behavior 8, 240-247.

Stanovich, KE., Nathan, RG., West, RF., Vala-Rossi, M., (1985) Children's Word Recognition in Context: Spreading Activation, Expectancy, and Modularity. Child Development 56, 1418-1428.

Meyer, DE., Schvaneveldt, RW., (1971) Facilitation in recognizing pairs of words: evidence of a dependence between retrieval operations. Journal of Experimental Psychology 90, 227-234.

Saturday, January 20, 2007

Working Memory Continued

There was a pretty direct correlation in Daneman and Carpenter (1980) between the then-new Reading Span Task and reading comprehension, as measured against reading samples and scores on the verbal portion of the SAT. What's important about this particular study is that they were the first to really concretise the notion that 1) people's working memory varies individually; exactly how (same volume with different command centers, different volumes with the same command center, or a combination?) is still being discussed. And 2) it birthed the concept of "complex span" (Hitch, 2006). Simple span tasks are basic: "remember this list of words: parsley, thyme, rosemary, sage" for a verbal span, "remember these numbers: 34, 4, 54" for a digit span, and "remember these locations" for a visuo-spatial span. Daneman and Carpenter introduced a processing component into these tasks.

There are a variety of people studying how these tasks demand and tax memory, with significant bleed-over from attentional, behavioral, and memory studies. They look at everything from eye movements to ERPs to fMRI. I'm of the belief that you have a certain capacity for working memory that is divvied up depending on the task. That is why I can do complex long division, but not while I'm also watching TV or driving.

Oh don't get me started on memory and driving...that's a whole field unto itself.

But people our age (though I'm afraid I'm at the tail end of it) are at the peak of their working memory spans. What does that mean per se? Probably that we're more efficient at remembering what we need to remember, so we don't really need a higher-capacity working memory. Hence adults tend to be far worse at playing video games, but far better at using the internet.

So what does this all have to with N400, my first entry? Well that, my dear readers, will have to wait for another entry.

Daneman, M., Carpenter, PA., (1980) Individual Differences in Working Memory and Reading. Journal of Verbal Learning and Verbal Behavior 19, 450-466.

Hitch, G., (2006) Working Memory in Children: A Cognitive Approach. In Bialystok, E., Craik, FIM., (Eds.) Lifespan Cognition. Oxford University Press, New York.

Thursday, January 18, 2007

so wait, what does that have to do with anything?

Working Memory (WM) is hard to track down and manipulate. We fill it up and empty it out so rapidly we don't even know it. We go from map to walking so fast, we calculate out a tip, we figure out what time our laundry will be out of the dryer, we read a paragraph and absorb the message of a book so quickly. So why bother studying it?

Like many things in life, a small thing like WM has profound implications. It has generally been localized to the dorsolateral prefrontal cortex (DLPFC), with a few other locations in the temporal lobe. People with DLPFC damage are unable to remember how they got where they are, or what you just said; their long-term memory, however, remains intact.

In a nutshell, without WM, we can't really function as a human is expected. We can't talk complexly, we can't do basic math. Hypothetically, and generally speaking, of course.

How does it tie into language? Remember (ha ha!) in my last entry, I discussed the three components of working memory: the phonological loop, the visuo-spatial sketchpad, and the central executive. The phonological loop is a cache that takes in all the language data and sends it to the central executive. The central executive then decides what to do next: look up a definition from your 11th-grade English class, keep listening for further context, or respond physiologically with fight-or-flight. The phonological loop handles reading as well as spoken language.

To remember an item, there is a related mechanism that has been dubbed the "articulatory loop," in which you remember an item by rehearsing it, such as that pesky phone number, or address, or username on MySpace. When you rehearse "soccergrrl28, soccergrrl28" in your head until you can type it in, that's the articulatory loop in effect.

These are important for my Div3 in a variety of ways. One half of my Div3 is dedicated to a Reading Span task, which relies on both the articulatory loop and the phonological loop. The Reading Span Task (see Daneman and Carpenter, 1980) requires people to read sentences one at a time and remember the last word of each sentence they were presented, in sets of 2 to 6 sentences. What it does is make the articulatory loop work while the phonological loop is trying to shove new information in. Some people are very good at this; they are known as "high span" people. Most college students, I've found, have a high span. I would have preferred a community sample, but what can you do?
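Scoring conventions for reading span vary from lab to lab; here's a minimal sketch of one possible convention. The function names and the majority-rule criterion are my own choices for illustration, not Daneman and Carpenter's exact procedure:

```python
def final_words(sentences):
    """The to-be-remembered items: each sentence's final word."""
    return [s.rstrip(".").split()[-1].lower() for s in sentences]

def set_recalled(sentences, recalled):
    """True if every final word in the set was recalled (any order)."""
    return set(final_words(sentences)) == {w.lower() for w in recalled}

def reading_span(results):
    """results maps set size -> list of True/False (one per trial).
    Span = largest set size where most trials were fully recalled."""
    passed = [size for size, trials in results.items()
              if sum(trials) > len(trials) / 2]
    return max(passed) if passed else 0

print(set_recalled(["The frog caught a fly.", "She made a jar."],
                   ["fly", "jar"]))  # True
print(reading_span({2: [True, True, True],
                    3: [True, True, False],
                    4: [False, False, True]}))  # span of 3
```

A "high span" subject is simply one whose largest reliably-recalled set size is at the top of the 2-to-6 range.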

This little finding, that people differ individually in their working memory capacity and in their ability to manipulate that information, has had a ripple effect in the psycholinguistic field. It has been shown that verbal WM (as the span task shows) is separate from mathematical working memory and serial recall (Shah and Miyake, 1996). Language, it seems, is a bit more hardwired in the brain than we give ourselves credit for. It also used to be thought that everyone had the same capacity (6 digits, I think); now it is individualized.

Why is this part of my Div3? The ability to manipulate language, in whatever center of the brain it happens, has several implications. People with higher WM capacities can untangle verbal knots faster, acquire vocabulary faster (but not better), and figure out sentence context better. These abilities are generally all related.

So, since what I'm presenting on screen is presented one word at a time, these words need to be held in working memory until they are dumped out. It was hypothesized that children don't have the ability to create sentence context the way adults do, and that is why their N400 is smaller than in adult subjects. But I think that's all for a later post.

At the present, I'm still parsing out what I mean. It makes total sense in my head how this is all related, but I can't even get a bubble-chart out at the moment. When I do, you'll be the third to know.

Citations pending.

Daneman, M., Carpenter, PA., (1980) Individual Differences in Working Memory and Reading. Journal of Verbal Learning and Verbal Behavior 19, 450-466.

Gathercole, SE., Baddeley, AD., (1993) Working Memory and Language. Erlbaum and Associates: Hillsdale.

Shah, P., Miyake, A., (1996) The Separability of Working Memory Resources for Spatial Thinking and Language Processing. Journal of Experimental Psychology: General 125(1), 4-27.

MacDonald, MC., Just, MA., Carpenter, PA., (1992) Working Memory Constraints on the Processing of Syntactic Ambiguity. Cognitive Psychology 24(1), 56-98.