Sunday, January 04, 2009

How we learn

A theory. Which I have, and that is mine. A-hem.

Yesterday I happened upon another paper about how infant learning takes place... it has long seemed to me that those who write about this didn't have kids, so they're inventing some kind of explanation. This particular paper was about learning language. Are we born with the ability to process language, or is that a learned thing? (i.e., the classic "nature vs. nurture" argument).

What I think we are born with is a pattern-matching feedback system. That's all.

In fact, I think that's all you need. Well, plus some chemical enjoyment feedback when patterns are matched and repeated, for positive reinforcement.
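If you wanted to sketch that in code, here's a minimal toy of what I mean (Python; the patterns-as-tuples, the match threshold, and the strength counter are all my own made-up illustration, not anything from the paper):

    def similarity(a, b):
        """Fraction of positions where two equal-length patterns agree."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    class PatternMatcher:
        MATCH_THRESHOLD = 0.8  # how close input must be to count as a match

        def __init__(self):
            self.patterns = {}  # pattern -> strength (how often reinforced)

        def observe(self, stimulus):
            # Try to match the stimulus against everything already stored.
            best = max(self.patterns,
                       key=lambda p: similarity(p, stimulus), default=None)
            if best is not None and similarity(best, stimulus) >= self.MATCH_THRESHOLD:
                self.patterns[best] += 1   # matched -> reinforce (the "enjoyment")
                return best
            self.patterns[stimulus] = 1    # no match -> a new base pattern
            return stimulus

Feed it the same few stimuli over and over--the parents, the rooms in the house--and those patterns accumulate strength, while one-off junk stays weak:

    baby = PatternMatcher()
    for _ in range(50):
        baby.observe(("mom-face", "mom-voice"))
    baby.observe(("truck-noise", "horn"))
    print(baby.patterns)
    # {('mom-face', 'mom-voice'): 50, ('truck-noise', 'horn'): 1}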

Think of it this way: what do babies do? They watch things. They wave their arms and feet a lot. They put things in their mouths (because the first thing that goes in their mouths makes them happy--getting fed). Imagine an audio and video pattern matcher, completely untrained, just receiving a huge amount of input all the time. Mostly it's junk, but it's not random junk--it's the parents, and the rooms in the house. Not much changes there, so there's lots of time for reinforcing imagery and sound.

Imagine that the waving of arms and feet is all essentially random motion--random neural firings (what else can it be, really?). But the mid-brain is aware of the firings that cause the arms to move, the eyes will eventually see the motion, and the feedback connection will be made that leads to awareness of causality: "these nerves and thoughts lead to this visible motion". Patterns match, reinforcement occurs, and memory forms.
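Here's that babbling loop as a toy program--hedged again: the "body" wiring, the command names, and the co-occurrence counting are all my own stand-ins for whatever the mid-brain actually does. The learner starts knowing nothing about the wiring; pure repetition plus reinforcement recovers the causal map:

    import random
    from collections import Counter

    COMMANDS = ["fire-A", "fire-B", "fire-C"]   # stand-ins for motor neurons

    def body(command):
        # The wiring the learner does NOT know: command -> visible motion.
        return {"fire-A": "left arm", "fire-B": "right arm", "fire-C": "leg"}[command]

    cooccur = {c: Counter() for c in COMMANDS}  # command -> motions seen with it

    for _ in range(1000):                # lots and lots of waving
        cmd = random.choice(COMMANDS)    # random neural firing
        seen = body(cmd)                 # the eyes see the motion
        cooccur[cmd][seen] += 1          # patterns co-occur -> reinforce

    for cmd in COMMANDS:
        print(cmd, "->", cooccur[cmd].most_common(1)[0][0])
    # fire-A -> left arm, fire-B -> right arm, fire-C -> leg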

Recall why a person born deaf doesn't spontaneously learn to speak: the feedback loop for the pattern-matching is broken. No feedback, no learning, because the muscular control of speech can't be tested against the sounds it produces.

Learning is all about pattern-matching. Learning something new is usually based on matching an existing pattern. Otherwise it takes a lot longer, because you have to generate new base patterns. Thus we always start with simple things.

Analogical reasoning is all about matching a pattern, and extending it.
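A tiny worked example of what I mean, with everything invented purely for illustration: extract the pattern from a few pairs, then extend it to something never seen.

    # Analogy as pattern-extension: a toy pluralizer.
    examples = [("cat", "cats"), ("dog", "dogs"), ("car", "cars")]

    suffixes = {new[len(old):] for old, new in examples if new.startswith(old)}
    assert len(suffixes) == 1      # all examples match one pattern: append "s"
    suffix = suffixes.pop()

    def extend(word):
        # Apply the matched pattern to a word never seen before.
        return word + suffix

    print(extend("tree"))          # "trees" -- no new base pattern needed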

So why haven't we used this approach to teach a computer to speak like we do? Beats me. Probably because it would take just as long as teaching a baby--years. So we have tended to take different approaches that go from zero to sixty in one leap, rather than zero to one to two to three...

We learn things when we are ready to learn them. Which means that we have to have enough base patterns to correlate against.

Some people are better at doing this pattern-match than others. They learn faster and earlier. We learn most things by watching others. I have personally observed this in action in a couple of places, #1 being the DC Metro subway system. Watch someone who wasn't born here and doesn't read or speak English try to figure out how to use a farecard machine. It can only be done by watching what others do to get one--because you can't get through the turnstile without it. You observe what someone else does, then you try to do it too. Receiving a farecard = success; you are happy because you have the card, so there is positive reinforcement. Next time you might have to watch again, but that is reinforcing a pattern, not creating one, so it goes faster.

When you hear someone speak and you don't understand them, you want them to go slower, or talk louder, because your learned patterns aren't being matched. If that person has an accent, you might have to hear unusual words (or words that match the sound but not the context) more than once in order to match the sound AND the context.

You'd think this would be experimentally verifiable behavior. No one seems to have done it (as far as I've read, which hasn't covered this topic in a while).

How hard could it be to do it?
