Learning capability of a bifurcated neural network
Abstract
Bifurcation is the process by which several neuronal structures emerge, as for instance the auditory neural system studied by Evans, who observed that "the nerve fibers (dendrites and axons) enter the cochlear nucleus in an (1) orderly sequence, (2) bifurcate, and (3) are distributed in an orderly fashion ... ." The quest to relate structure and function in natural (as opposed to artificial) neural systems motivates us to propose the "Bifurcated Neural Network" (BNN) model. The architecture has an orderly sequence in the sense that it is feedforward. The neurons are also distributed in an orderly fashion because they are grouped into different layers. Lastly, the architecture is bifurcated: if we count layers starting at L = 0, corresponding to the output layer with a single node, then layer L contains 2^L nodes.
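The layer sizes described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function names are ours, and the connection count assumes the simplest bifurcated wiring, in which each node receives input from exactly two nodes in the next-deeper layer (a complete binary tree).

```python
def bnn_layer_sizes(depth):
    """Nodes per layer of a BNN: layer L (L = 0 is the output layer) has 2**L nodes."""
    return [2 ** L for L in range(depth + 1)]

def bnn_connection_count(depth):
    """Total edges, assuming each node bifurcates into two inputs (binary-tree wiring).

    Between layer L and layer L+1 there are 2**(L+1) such edges.
    """
    return sum(2 ** (L + 1) for L in range(depth))

# A depth-3 BNN: output layer of 1 node down to an input layer of 8 nodes.
print(bnn_layer_sizes(3))       # [1, 2, 4, 8]
print(bnn_connection_count(3))  # 14
```

Under this tree-wiring assumption the edge count grows only linearly in the number of nodes (2^(D+1) - 2 edges for depth D), which is what makes the BNN a candidate for the "simplest" architecture in terms of connectivity.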
The objectives of this paper can be divided into two parts. First, we want to show that the proposed model is capable of performing cognition-related tasks. The mathematical proof is anchored in Vapnik-Chervonenkis (VC) learning theory. We essentially derive a bound on the probability of error when the network is presented with a new set of input vectors. If the calculated value is "small," we say that the BNN is trainable, or in a similar sense, that the task being taught is learnable. Second, we try to illustrate that the BNN is the simplest network, in terms of the number of connections, that can perform the tasks mentioned above.
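For context, the kind of bound invoked here is the classical VC generalization bound in Vapnik's formulation; the paper's own bound may differ in its constants, but the structure is the standard one. With probability at least $1 - \eta$, for a hypothesis class of VC dimension $h$ trained on $N$ samples,

\[
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha) \;+\; \sqrt{\frac{h\left(\ln\frac{2N}{h} + 1\right) - \ln\frac{\eta}{4}}{N}},
\]

where $R(\alpha)$ is the expected (true) risk and $R_{\mathrm{emp}}(\alpha)$ the empirical risk of the trained network. The bound is "small" when $N$ is large relative to $h$, which is why bounding the VC dimension of the BNN architecture certifies its learnability.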