only when each word is spoken slowly in an otherwise silent setting. This type of system is easily confused by background noise (Moyne 100).

Ben Yuhas's theory is based on the notion that understanding human speech is aided, to some small degree, by reading lips while listening. The emphasis on lip reading is thought to increase as surrounding noise levels increase. This theory has been applied to speech recognition by adding a video analysis system that lets the computer view the speaker's lips while hearing the speech. The computer, through the neural network, can learn from its mistakes during a training session. Looking at silent video stills of people saying each individual vowel, the network developed a series of images of the different mouth, lip, teeth, and tongue positions. It then compared the video images with the possible sound frequencies and guessed which combination fit best. Yuhas then combined the video recognition and speech recognition systems, feeding in a video frame along with speech that contained background noise. The system estimated the possible sound frequencies from the video and combined those estimates with the actual sound signals. After about 500 trial runs the system was as proficient as a human looking at the same video sequences. This combination of speech recognition and video imaging substantially increases security, not only by recognizing a large vocabulary but also by identifying the individual customer using the system.

Current Applications

Laboratory advances like Yuhas's have already created a steadily growing market in speech recognition. Speech recognition products are expected to break the billion-dollar sales mark this year for the first time. Only three years ago,...