Friday, July 29, 2011

Parsing in a Cognitive Modeller

A colleague of mine recently expressed interest in the research I am doing for an honors thesis. "Syntactic parsing in a cognitive modeling system" isn't exactly a crystal clear expression of my work, after all. I embarrassed myself by throwing out a few cursory statements before trailing off and ending with an "it's kind of hard to explain". Why can't I explain my own work? Partly because I don't explain it very often, and therefore I'm bad at it. Another reason is that I have met with opposition to it in the past. Fellow linguists are immediately fascinated by the idea of modeling language use inside a digital brain; computer scientists remain relatively unimpressed. "There are plenty of blazing fast parsers out there, so why build one just to act like a human? Besides, you don't even know if the computer is doing the same thing we are!" I'm going to use this and the next post to organize my thoughts and try to explain how parsing works in a cognitive modeling system, and why we would even try it in the first place.
This post will address cognitive modeling, and the next will move on to parsing.
Let's pretend that some crazy scientist manages to create an unstable time portal, like Will Robinson's, and that before he gets to use it himself, a rather valuable computer gets sucked through instead, sending it back to sometime in the 1950s. There, a curious electrical engineer finds it and, realizing the novelty of it, shares his discovery with the scientists of his academic community. Though they have no way of guessing where the machine came from, they will try their darndest to figure out how it works.
They go about this by studying two aspects of the machine: the hardware and the software. When they look inside, they observe the mass of wires, chips, resistors, capacitors, and a myriad of other tiny gadgets all somehow integrated into one perfect system. They measure voltage across different points in the circuitry and observe the flow of power from the battery into the other parts. They analyze samples of it to discover what material it is made of, and run a whole bunch of other tests that are not entirely clear to non-engineers.
Observing the software, on the other hand, is much less complicated, because all they have to do is turn it on. Let's just say Will Robinson's computer was a futuristic Mac of some sort. Turning it on greets the scientists with a welcome screen and then Will's desktop.
The scientists are intrigued by the fact that the computer can do all sorts of complicated things, like play music and videos, compress files, typeset documents, and manage large spreadsheets, seemingly without doing any work. Putting it in perspective, their own computers look like this. They can surmise that there is one central system, called OS X, that runs everything else, and that each function runs in a separate program. They learn that each program runs in a window, that programs are made up of more basic functions such as file management, and that certain things are impossible, like creating files with "?" in the name.
They also tinker with the hardware to see how each piece affects the software. Through this they uncover the difference between RAM and the hard drive, the basic purpose of the CPU, and the fact that the USB ports transmit information.
After years of studying the Mac, they attempt to create machines which imitate its functions. One person creates a crude screen, another makes some memory with pathetic storage size, and others draft intricate blueprints explaining how the programs function within the machine. They don't all agree on the underlying mechanisms, so they split and pursue different theories. The end product is several schools of research attempting to build the machine by studying different aspects of it. Will they ever make an actual Macintosh themselves? Not likely, with 1950s technology. How can they even tell if they've gotten it right? They can experiment with their own machine and see if it acts somewhat like the Mac.

The scientists study the hardware and software inside of the futuristic Mac.
Now, what does all of this have to do with cognitive modeling?
Wait, what is cognitive modeling? To explain that, we first need a few definitions:
  • Cognition: the exercise of human intelligence, including both deliberation and action, and performance of the wide variety of tasks that humans participate in.
  • Cognitive Science: the study of minds as information processors, including how information is processed, represented, and transformed.
Cognitive science attempts to explain cognition, or human intelligence, by discovering (or theorizing about) the processes and structures underlying it at the lowest level, analyzing the brain as if it were a computer. We study the mind in the same way those scientists studied the Mac: either by observing the actual brain (EEG, fMRI) or observing behavior (eye tracking, reaction time), or in some cases even observing what happens when certain parts break or are damaged by sickness. Researchers have found some interesting things; for example, look at the Wikipedia article to see what we know about different types of memory.
Cognitive scientists study and imitate the physical and behavioral aspects of the human mind.
Data from these types of experiments contribute to our understanding of the mind, but we still do not completely understand the complex processes that make humans what they are. We can't try our hand at building a human, like the scientists did with the Mac, either; besides technological concerns, there are also ethical ones. Instead, scientists create computer models to simulate human activity. Although many models simulate single aspects of cognition, such as hand-eye coordination or reading, general cognitive frameworks, which model human behavior overall, have also been created.
These computational cognitive models are extremely useful to researchers because they provide a universal testing ground in which mini theories about cognition can be tested. They embody what Allen Newell called Unified Theories of Cognition (UTCs). The name basically means that if a researcher has a theory of how one cognitive activity works, it should fit within the larger, unified framework already tried and tested by the scientific community. Once the mini theory is implemented within the larger framework, experiments can be run using the resulting model, which is guaranteed to exhibit the properties of the larger framework. This has the benefit of constraining the variables in one's own model, making it both easier to design and scientifically more sound.
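To make that concrete, here is a minimal Python sketch of the idea, assuming an entirely hypothetical framework (this is not the real API of Soar or ACT-R, and all the names here are my own inventions): the framework fixes global constraints, like the time cost of each cognitive cycle, and the researcher's mini theory supplies only the task-specific rules.

    # Hypothetical sketch only -- not the real API of Soar or ACT-R.
    # The framework fixes global constraints (here, the time cost of one
    # cognitive cycle); the "mini theory" is just a list of rules.

    CYCLE_TIME = 0.05  # seconds per cycle; ACT-R's default cycle is 50 ms

    class Framework:
        def __init__(self, rules):
            self.rules = rules   # the mini theory: (condition, action) pairs
            self.memory = {}     # working memory shared by all rules
            self.clock = 0.0     # simulated time, managed by the framework

        def run(self, goal):
            self.memory["goal"] = goal
            while self.memory.get("goal") is not None:
                for condition, action in self.rules:
                    if condition(self.memory):
                        action(self.memory)  # fire the first matching rule
                        break
                else:
                    break                    # no rule matched: model is stuck
                self.clock += CYCLE_TIME     # timing imposed by the framework
            return self.clock

    # A trivial mini theory: count down from the goal number to zero.
    rules = [
        (lambda m: m["goal"] > 0,  lambda m: m.update(goal=m["goal"] - 1)),
        (lambda m: m["goal"] == 0, lambda m: m.update(goal=None)),
    ]

    print(Framework(rules).run(10))  # the model's predicted task time

Because the timing lives in the framework rather than in the rules, any two mini theories plugged into it are automatically compared on the same clock, which is exactly the constraint-sharing benefit described above.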
There are several other reasons that these models are useful:
  • You don't have to pay the model to take your experiment, nor do you have to pay a technician to scan its brain while it carries out various tasks. Though initial programming and maintenance cost time and money, these models will carry out an infinite number of tasks for free. 
  • Because we can step through the execution of a program, we can see exactly what the model is "thinking" at all times (akin to "think aloud protocol"), allowing true introspection into the nature of the model and the theoretical consequences of its acceptance.
  • Even if a model is worked out meticulously by hand, humans are error prone. Running the program on a computer guarantees accurate evaluation of the model.
  • Models shared by the community provide baseline results, making work from different researchers comparable and making it easier to measure the advancement of the field.
  • Models can be shared with other researchers easily via the internet.
There are several such frameworks available, including Soar, Allen Newell's creation, and ACT-R, which is more popular and seems to draw more government funding.
Like the 1950s scientists, researchers in cognition have split into different schools which study different aspects of the mind. The main split is between symbolic and subsymbolic models.

  • Symbolic models focus on the abstract symbol-processing capabilities of humans; we can combine physical patterns into structures and can also produce new expressions by manipulating other structures, e.g. art, language, music.
  • Subsymbolic models focus on the neural properties of the brain; the most widely known is connectionism, which models complex behavior through connections between simple nodes.
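As a toy illustration of the subsymbolic idea (my own minimal sketch, not any published model), the snippet below wires three trivial "neurons" together in Python. No single node can compute XOR, but the network as a whole can, which is the connectionist point: complex behavior emerges from connections between simple nodes.

    # Toy connectionist sketch (illustrative only): each node just sums
    # weighted inputs and fires past a threshold, yet together the nodes
    # compute XOR, which no single node can.

    def node(inputs, weights, threshold):
        """Fire (1) if the weighted sum of the inputs crosses the threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

    def xor_net(x1, x2):
        h_or  = node([x1, x2], [1, 1], 0.5)       # hidden node: "x1 OR x2"
        h_and = node([x1, x2], [1, 1], 1.5)       # hidden node: "x1 AND x2"
        return node([h_or, h_and], [1, -1], 0.5)  # output: OR but not AND

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, "->", xor_net(a, b))  # prints the XOR truth table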
And research thrives and will continue to thrive for a long time, advancing artificial intelligence, medicine, economics, and other sciences related to human behavior. Have we made a working human? No. Do we think we understand them? Yes, but not perfectly. Cognitive models are based on years and years of research and empirical backing. It's safe to say that human behavior can be modeled and even predicted at some level.
Here are some of the modeling projects that use a general cognitive framework:

  • A bunch using the COGENT framework: medical diagnosis, mental rotation, Towers of Hanoi.
  • Learning to play Mario using Soar.
  • Pilot modeling using Soar: modeling pilot behavior for better mission planning and design.
  • Simulated students using ACT-R: the authors evaluate different instructional methods on simulated students.
  • Eagle Eye. Oops! That's not real. Maybe some day.

Language is an extremely complex phenomenon, but it too must be simulated in some way if we are to confirm the validity of our models. More on that next time.

Sunday, July 24, 2011

How to say "p.s. you suck" in Japanese

This post will focus on two very interesting vocabulary items in Japanese; I call them "p.s. you suck" suffixes, though I'm sure there is a better word to describe them.
Adding め to a noun insults the person or thing the noun names:

  • 馬鹿め!["baka-me", idiot!]
  • この小娘め!["kono komusume-me!", stupid girl!]
  • 太郎め! ["Tarou-me!", dang Tarou!]

The verb やがる ["yagaru"] attaches to the base II or connective form of verbs and adds the same feeling as the め suffix:

  • 覚えてやがれ!["oboeteyagare!", remember this, you fool!]
  • 罠にかかりやがって... ["wana ni kakariyagatte...", getting caught in a trap, you're so stupid.]
  • 何を言いやがるんだ! ["nani o iiyagarunda!", what the heck are you saying?!]

Both of these forms are used when yelling at or cursing some entity.
I personally don't use them a lot, but they are very common in anime, probably because of its dramatic nature. Here's an example for やがる. It's the last word the villain says (besides きら, a sparkly sound akin to "bwing" or "bling"). Note that the subtitle should be "Remember this!", not "I'll remember this!".

Thursday, July 21, 2011

Japanese Stackexchange

Stackoverflow is one of those sites born of an idea so golden you can hardly believe nobody has ever done it before. Using a simple system of reputation, privileges, and badges, users are encouraged to post answers to on-topic questions for the benefit of the asker and the internet community at large. I go there whenever I need help with a random programming problem. Questions are usually answered quickly because people want the points, and I don't have to feel bad about bothering anyone, because asking a question gives someone the opportunity to build their reputation. Started in 2008, the site was immediately a phenomenal success and is still alive and well today. Though Stackoverflow is focused on computer programming, the launch of Stack Exchange allowed the growth of other sites covering a variety of topics. Area 51 is sort of a sandbox for people to propose new sites and build communities to nourish and govern those sites.
Recently I was excited to find that a Japanese Stack Exchange beta site was ready to open. On opening day I was hooked! I answered a large portion of the questions that were answered that day. In fact, I was so hooked that I had to stop posting... I think that part of the success of Stack Exchange is that the design creates motivation to answer questions in a manner similar to addictive online games; reputation points aren't worth anything in real life, but somehow racking 'em up is such a satisfying experience that one keeps going back for more. I consider harnessing the power of addictive point systems for the good of mankind to be a praiseworthy accomplishment, but I must restrain myself from getting caught up in it. I will, however, not stop myself from asking questions, because however fun that may be, it is not as addictive.
Anyway, I just wanted to put in a good word for the new Japanese Stack Exchange site. If it doesn't get enough traffic it will be shut down, so tell everyone you know who wants to learn Japanese to start asking questions! Or, if you know Japanese, start answering. The best way to learn is to teach. Plus you get this cool banner:


profile for Nate Glenn at Japanese Language and Usage, Q&A for students, teachers, and linguists wanting to discuss the finer points of the Japanese language