Learning and generalization in neural networks with diffusion-limited architectures
We study the ability of neural networks with diffusion-limited configurations to learn mathematical operations and, after training, to generalize to new inputs that are not part of the training set. This type of network configuration is important because real biological neural networks grow spatially following the rule of diffusion-limited aggregation. In particular, we consider the operations of signal transduction, differentiation, integration, thresholding, and number-representation conversion. Such operations are essential to the survival of a biological species: differentiation, for example, is needed to recognize rates of change, while integration is important in quantifying sizes or magnitudes. The study also attempts to clarify a fundamental issue concerning configuration efficiency (the least number of neurons and the simplest type of neuron interconnections) in generalizing neural networks. Our results show that diffusion-limited networks are trainable and generalize well on the mathematical tasks described above.
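The growth rule mentioned above, diffusion-limited aggregation, can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual network-construction procedure: the grid size, periodic boundaries, and nearest-neighbor sticking rule are assumptions made here for simplicity. A seed site is fixed at the center of a lattice, and each new particle performs a random walk until it lands next to the existing cluster, where it sticks; the resulting cluster is the branched, fractal structure characteristic of DLA.

```python
import random


def grow_dla(n_particles, size=25, seed=0):
    """Grow a diffusion-limited aggregate on a size x size grid.

    Illustrative sketch only: a seed occupies the center; each walker
    is released at a random empty site, random-walks with periodic
    boundaries, and sticks when it becomes adjacent to the cluster.
    """
    rng = random.Random(seed)
    center = size // 2
    cluster = {(center, center)}  # fixed seed site
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def touches(x, y):
        # True if any of the four nearest neighbors is occupied.
        return any((x + dx, y + dy) in cluster for dx, dy in steps)

    for _ in range(n_particles):
        # Release the walker at a random unoccupied site.
        x, y = rng.randrange(size), rng.randrange(size)
        while (x, y) in cluster:
            x, y = rng.randrange(size), rng.randrange(size)
        # Random walk until the walker is adjacent to the cluster.
        # Since no neighbor is occupied before a step, the walker can
        # never step onto an occupied site.
        while not touches(x, y):
            dx, dy = rng.choice(steps)
            x, y = (x + dx) % size, (y + dy) % size
        cluster.add((x, y))  # stick to the cluster
    return cluster


# Example: grow a 21-site aggregate (seed + 20 particles).
aggregate = grow_dla(20, size=25, seed=1)
```

Each particle adds exactly one site, so the cluster always contains the seed plus one site per particle; the branching arises because walkers are far more likely to stick to the tips of the cluster than to diffuse into its interior.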