Artificial Intelligence
May 20, 2007

At least some versions of artificial intelligence are attempts not merely to model human intelligence, but to make computers and robots...
Does the human mind work like a computer? If so, what kind of computer? A theory known as connectionism offers a revolutionary perspective on these issues. Ken and John delve into cutting-edge cognitive science with Jay McClelland from Stanford University, an architect of the connectionist view.
Connectionism is an innovative theory about how the mind works, and it’s based on the way the brain and its neurons work. According to the theory, although each of our individual neurons has very little computational power on its own, neurons have tremendous computational power when organized in combination with one another. Ken and John are joined by their guest, James McClelland, to discuss the strengths and weaknesses of the connectionist model.
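To give a concrete feel for that claim, here is a minimal sketch of our own (not a model from the episode) of how very simple units gain power in combination: a single threshold unit cannot compute the exclusive-or of two inputs, but a tiny network of three such units can.

```python
# A single "neuron": a weighted sum of its inputs passed through a threshold.
def unit(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# No single threshold unit can compute XOR, but three of them wired together can.
def xor_network(x1, x2):
    h_or  = unit([x1, x2], [1, 1], -0.5)   # fires if x1 OR x2
    h_and = unit([x1, x2], [1, 1], -1.5)   # fires if x1 AND x2
    return unit([h_or, h_and], [1, -1], -0.5)  # OR but not AND = exclusive-or

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```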
Understanding the way we learn is an age-old problem in psychology, and according to McClelland, questions surrounding learning motivate the connectionist position. The old-fashioned artificial intelligence (AI) model of learning held that because our brains are structured in a particular way from the day we are born, our thoughts must be pre-structured in particular ways too. Human language, for example, was argued to be pre-specified in our genes. Unfortunately, McClelland argues, this AI approach does not make contact with the fact that the way we talk and interact is shaped by our experiences and the things we’ve learned.
McClelland explains that connectionism took hold in the early 1980s, when scientists began making better computer models of neurons and of the way neurons work together in systems. The connectionist theory of learning is that neurons are interconnected, and that the system learns when the connections between neurons change.
John questions McClelland about the relation between connectionism and an older theory, associationism. McClelland agrees that connectionism is a modern version of the same idea, but with one key distinction. Associationism is the theory that associations are formed in our minds when two events occur together; we learn by contiguity, and when something new happens we understand it by generalizing and approximating according to our previous associations. According to McClelland, the weakness in the associationist account is that it doesn’t explain how we learn to re-associate events in our minds. We don’t just approximate in order to understand new information; we learn new information. The connectionist system learns by adjusting the connections between neurons.
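As a rough illustration of what "adjusting the connections" amounts to, here is a toy sketch of our own (using the classic delta rule on a single unit, not one of McClelland's actual models): the unit starts with random connection strengths and nudges them after each experience until it has learned a simple pattern.

```python
import random

def activation(inputs, weights, bias):
    # The unit fires (1.0) if its weighted input exceeds the threshold.
    return 1.0 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0.0

# Experiences to learn from: the AND relation between two inputs.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]  # initial connection strengths
bias = 0.0
rate = 0.1  # how far each connection moves on each experience

for epoch in range(25):
    for inputs, target in examples:
        output = activation(inputs, weights, bias)
        error = target - output
        # Learning = changing the connections, in proportion to the error
        # and to how active the sending unit was.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print("learned weights:", weights, "bias:", bias)
for inputs, target in examples:
    print(inputs, "->", activation(inputs, weights, bias), "(target:", target, ")")
```

The point of the sketch is only that nothing here is pre-specified: the same unit, with the same learning rule, would settle into different connection strengths if its experiences were different.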
John, Ken and McClelland continue the conversation. They discuss practical applications for connectionist systems in computer science, the effect our emotions have on learning, and some objections to connectionism.