What can artificial neural networks teach us about connectomics?

When children learn to read, they are taught to "sound out" words, that is, to read aloud letter by letter. In the 1980s, Terrence Sejnowski and Charles Rosenberg sought to model this process by building a neural network called NETtalk that learned to convert written text to speech [1]. In other words, they taught a computer to read [2]. The authors concluded that NETtalk was too simple to serve as a complete model of human learning, but it does have some important implications for connectomics. Connectomics is the study of the wiring diagram of the nervous system: the pattern of connections between neurons. The hope is that by mapping the connectome, we will gain fundamental insights into the function, and dysfunction, of the brain in health and disease. A quick summary of the inner workings of NETtalk is necessary before we can understand the implications for connectomics.

NETtalk is a simple neural network consisting of three layers of nodes: an input layer, a hidden layer, and an output layer (Figure 1). To train the network, written text is fed into the input layer, propagated through the hidden layer, and finally mapped onto a phoneme in the output layer. The letter-to-phoneme correspondence is determined by the connections between nodes, and the network learns the correct correspondence by adjusting the weights of the connections between layers. A teacher unit provides feedback, and over many iterations of training the network uses this feedback to adjust its connections.
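To make this concrete, here is a minimal sketch of the same idea, not NETtalk itself: a three-layer network trained by backpropagation on a toy mapping from one-hot "letter" codes to "phoneme" targets. The layer sizes, learning rate, and task are illustrative assumptions, far smaller than the original model.

```python
import numpy as np

# Toy 3-layer network in the spirit of NETtalk (not the original model):
# input -> hidden -> output, with weights adjusted from an error signal.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 8, 3            # illustrative sizes, not NETtalk's
W1 = rng.normal(0, 0.5, (n_in, n_hidden))  # input-to-hidden weights
W2 = rng.normal(0, 0.5, (n_hidden, n_out)) # hidden-to-output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "letters" (one-hot inputs) mapped to toy "phonemes" (one-hot targets).
X = np.eye(4)
Y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)

lr = 1.0
for _ in range(2000):                      # many iterations of training
    h = sigmoid(X @ W1)                    # input -> hidden
    y = sigmoid(h @ W2)                    # hidden -> output
    err = Y - y                            # the "teacher" feedback signal
    # Backpropagate the error and adjust the weights between layers.
    d_out = err * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_out
    W1 += lr * X.T @ d_hid

pred = sigmoid(sigmoid(X @ W1) @ W2)
print(np.argmax(pred, axis=1))             # learned mapping: [0, 1, 2, 0]
```

The key point mirrored from the text: the "knowledge" of letter-to-phoneme correspondence lives entirely in the connection weights `W1` and `W2`, which the feedback signal reshapes over many training passes.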

Figure 1: Schematic of NETtalk network architecture from Sejnowski and Rosenberg (1987).
