These processors are connected by unidirectional data buses and process only information addressed to them. A centralized processor acts as a traffic cop for data, which is parceled out to the neural network and retrieved in its digested form. Generally speaking, the more processors connected in the neural net, the more powerful the system.

Like the human brain, neural networks are designed to acquire knowledge through experience, or learning. When examples are provided to a neural network expert system, it forms generalizations much as a child learns to recognize everyday items (such as chairs, dogs, etc.). Modern neural network systems offer greatly enhanced computational ability owing to the parallelism of their circuitry. They have also proven themselves in fields such as mapping, where minor errors are tolerable, example data are plentiful, and the governing rules are hard to pin down.

Educating a neural network typically begins with "backpropagation of error," a learning procedure that compares the network's actual outputs against the desired outputs and feeds the difference backward through the network to adjust the connection weights (a short sketch appears below). In effect, it defines the framework within which the system's inputs and outputs take on meaning. The best analogy I can cite is the Windows operating system from Microsoft: of course, personal computers don't learn by example, but just as Windows-based software will not run outside (or in the absence) of Windows, a network of this kind cannot learn without such a procedure to drive it.

One negative feature of educating neural networks by "backpropagation of error" is a phenomenon known as "overfitting," also illustrated below. Overfitting occurs when the network memorizes its training examples, noise and all, rather than learning the pattern behind them, so its performance on new data degrades as a result. At worst, the expert system may lock up, but it is more common to see an impeded state of ...
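To make the training step concrete, here is a minimal sketch of "backpropagation of error" in Python. The 2-4-1 layer sizes, sigmoid activation, XOR example data, and learning rate are my own illustrative assumptions, not details from the discussion above; the point is only to show the error being measured at the output and propagated backward to nudge the weights.

    import numpy as np

    # Minimal sketch of "backpropagation of error" on a tiny network.
    # The 2-4-1 layer sizes, sigmoid activation, XOR data, and learning
    # rate are illustrative assumptions, not details from the text.

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output
    lr = 0.5                                            # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(10000):
        # Forward pass: compute the network's current outputs.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # The "error" is the gap between actual and desired outputs.
        err = out - y

        # Backward pass: propagate the error from the output layer toward
        # the inputs, scaling by the sigmoid derivative at each layer.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Adjust every weight in proportion to its share of the error.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))  # converges toward [[0], [1], [1], [0]]

Each pass through the loop plays the role described above: the network's outputs are compared with the desired outputs, and the resulting error flows backward to correct the connection weights.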
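Overfitting can be made concrete the same way. The polynomial model and synthetic noisy data in the sketch below are assumptions chosen purely for the demonstration; what matters is the pattern in the output, where an over-flexible model memorizes its examples, noise included, instead of learning the rule behind them.

    import numpy as np

    # Sketch of overfitting with a deliberately over-flexible model.
    # The polynomial fit and synthetic noisy data are assumptions chosen
    # for the demonstration; the text does not name a particular model.

    rng = np.random.default_rng(1)

    def noisy_samples(n):
        x = rng.uniform(-1, 1, n)
        return x, np.sin(3 * x) + rng.normal(0, 0.2, n)  # rule + noise

    x_train, y_train = noisy_samples(20)    # examples shown to the model
    x_test, y_test = noisy_samples(200)     # fresh data it has never seen

    for degree in (3, 12):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train {train_err:.3f}  test {test_err:.3f}")

    # The high-degree fit typically drives the training error toward zero
    # while the error on fresh data rises: the model has memorized its
    # examples, noise and all, rather than the rule that produced them.

The degraded performance on fresh data is exactly the impeded state of function described above: the system does very well on what it has memorized and poorly on everything else.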