rithm. Examples of algorithms are the procedures taught in elementary schools for adding and multiplying whole numbers; when these procedures are mechanically applied, they always yield the correct result for any pair of whole numbers. Some algorithms are fast (e.g. multiplication), others are very slow (e.g. factorisation, playing chess). Consider, for example, the following factorisation problem
$$? \times ? = 29083.$$
How long would it take you, using paper and pencil, to find the two whole numbers which should be written into the two boxes (the solution is unique)? Probably about one hour. Solving the reverse problem, $127 \times 229 = ?$, again using the paper-and-pencil technique, takes less than a minute. All because we know fast algorithms for multiplication but we do not know equally fast ones for factorisation.

What really counts for a ``fast'' or a ``usable'' algorithm, according to the standard definition, is not the actual time taken to multiply a particular pair of numbers but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers. The same standard textbook method of multiplication requires little extra work when we switch from two three-digit numbers to two thirty-digit numbers. By contrast, factoring a thirty-digit number using the simplest trial division method (see inset 1) is about $10^{13}$ times more time- or memory-consuming than factoring a three-digit number. The consumption of computational resources grows enormously as we keep increasing the number of digits. The largest number that has been factorised as a mathematical challenge, i.e. a number whose factors were secretly chosen by mathematicians in order to present a challenge to other mathematicians, had 129 digits. No one can even conceive of how one might factorise, say, thousand-digit numbers; the computation would take much more than the estimated age of the universe. Skipping details of the computational complexity, we only mention that computer...
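Since inset 1 is not reproduced in this excerpt, the following is an assumed, minimal Python sketch of the trial-division method it refers to, not the authors' own code. It makes the scaling claim concrete: to factor an $l$-digit number $n$ one tests candidate divisors up to $\sqrt{n} \approx 10^{l/2}$, so moving from three digits ($\sim 10^{1.5}$ trials) to thirty digits ($\sim 10^{15}$ trials) multiplies the work by roughly the $10^{13}$ quoted in the text.

```python
def trial_division(n: int) -> int:
    """Return the smallest non-trivial factor of n, or n itself if n is prime.

    Tests divisors 2, 3, 4, ... up to sqrt(n). For an l-digit number this
    is of order 10**(l/2) steps in the worst case, which is why the cost
    explodes as the number of digits grows.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor found below sqrt(n), so n is prime


if __name__ == "__main__":
    n = 29083
    p = trial_division(n)
    print(p, n // p)  # prints: 127 229 -- the unique solution of ? x ? = 29083
```

Run on 29083 this loop returns the unique pair 127 and 229 almost instantly; applied to a thirty-digit product of two primes of comparable size, the same loop would need of order $10^{15}$ iterations, illustrating why trial division is unusable for large numbers.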