information. For example, the visual fields would be accompanied by spatial processing programs, and the auditory systems would need temporal processing programs. The robot would be programmed to avoid pain and other harmful stimuli and to seek out necessities. All of this information could be built into a database, much like human memory, so that the robot could construct a representation of its surroundings and eventually learn how to interact with the environment. In what way would this differ from humans, if human intentionality is not so different from machine states?

Bridgeman raises three problems that, in his view, demonstrate the invalidity of Searle's argument:

1) The human brain only receives and emits strings of input and output. Aside from this, the brain is deaf, dumb, and blind; it is only a function of electrical impulses (Bridgeman chooses to ignore hormonal levels and their interaction with the brain).

2) Insofar as the brain has resources such as genetic information and experience that enrich its database, these do not bring intentionality with them and therefore do not challenge the computational argument. The brain must be characterized solely by neuronal properties; to go beyond this is a form of dualism.

3) Searle fails to provide a suitable criterion for how far intentionality can be extended. He is willing to attribute intentionality to certain animals, such as the ape, but at what juncture does this stop?

With these three points, according to Bridgeman, we are left with a human brain that has an intention-free, genetically determined structure, on which are superimposed the results of storms of tiny nerve signals. He also challenges Searle's use of mathematics to demonstrate that humans understand something machines do not. Bridgeman claims that neither he nor any other human understands numbers; rather, we merely apply a system of rules learned in childhood. This is the same basic system of rules...