These days, technology seems to be evolving in the direction of parallelization.

This revolution started with graphics acceleration cards entering the general-purpose domain, where these cards (CUDA by NVIDIA, for example) proved their remarkable power, and it continues through multicore CPUs.

I am very optimistic about this direction, given the wide possibilities of parallel computation. The main reason this technology will be so powerful is that it follows the way nature works. In nature everything happens in parallel, and to work well within nature we need to use the same processes, the processes nature makes available.

It is interesting to observe the evolution of organisms on Earth, where species evolved from fully parallel systems into fully parallel systems that simulate a single-process system. The human brain is the clearest example: thought appears as a single process (when we think through a sequence of operations, for example to solve a problem or reach an objective in a perfect logical sequence) but is implemented by fully parallel sub-processes.

This is also why artificial calculators evolved in the opposite direction: they started from a single process, simulating what was considered the most powerful human capability, the one that makes a human different from an animal — we are able to plan a sequence of operations following logical relations. So now we have built incredibly powerful serial calculators (around 4 GHz clock speed!) and we have realized that what we thought was the most powerful process in human thinking is not actually the hardest to reproduce.

There are other processes that seem to play a central role in thinking, and these processes are not exclusively human: they are widely used by all animal species.

The first example is image recognition, a very simple task for any person and any animal, yet practically impossible to implement today on a serial calculator.

It is incredible that what we consider a very simple, almost trivial process is exactly what we are unable to reproduce, and this fact brings humans very close to a mouse, despite the historical obsession with the idea that humans hold a central role in the universe.

A fully parallel calculator can open a new door, breaking some limits of artificial problem solvers.

Humans seem very powerful at solving problems in non-deterministic domains: optimization problems, pattern recognition, and in general all NP problems for which we don't know a P solution. Here humans have a good chance of doing better than artificial machines.

An NP problem is a problem that can be solved in polynomial time by a non-deterministic Turing machine (NDTM). Such a machine is only an abstract conceptual construction, because no physical machine of this kind exists at the moment (quantum physics, in the most optimistic expectation, could solve this problem, but I don't believe that will be the case).

This machine can branch into different states at each step of the computation: for example, it can explore all the values of N bits in O(N) steps, instead of the O(2^N) steps required by a serial calculator or deterministic Turing machine (DTM). This machine can solve SAT (the boolean satisfiability problem) in polynomial time!
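To make the gap concrete, here is a minimal sketch (in Python, with a made-up three-variable formula) of what a deterministic machine must do for SAT: sweep all 2^n assignments, while checking any single guessed assignment is polynomial — exactly the step an NDTM gets "for free" by branching.

```python
from itertools import product

# Hypothetical tiny CNF instance, DIMACS-style literals:
# +k means "variable k is true", -k means "variable k is false".
clauses = [[1, -2], [2, 3], [-1, -3]]
n = 3

def satisfies(assignment, clauses):
    """Polynomial-time check: does this one assignment satisfy every clause?"""
    return all(
        any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# A DTM has no guessing step, so it must enumerate all 2**n assignments;
# an NDTM branches into all of them at once and accepts if any branch does.
solutions = [a for a in product([False, True], repeat=n) if satisfies(a, clauses)]
print(len(solutions))  # 2 of the 2**3 assignments satisfy this instance
```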

In this domain humans are very brilliant, but with a fully parallel calculator we can simulate (up to a constant factor) an NDTM, and this is the most important benefit of parallel computation. We can close the gap where humans seem to excel.

It is also possible to simulate an NDTM with a serial calculator, but that way we cannot use the full computational power available in nature, because a serial process can compute only about 4×10^9 operations per second on a conventional processor.

In the parallel domain, what we could do, for example, with 4 GB of RAM at a 1 GHz clock is (4×10^9)×10^9 operations per second!

We could do 10^9 times more computation, using RAM with only a 1 GHz clock!

And by increasing the amount of RAM we would also increase the computational power! (This is not possible with a serial calculator.)
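As a rough illustration of the idea (a sketch, not actual hardware of this kind): Python's arbitrary-precision integers can act as very wide bit-vectors, so a single bitwise expression evaluates a boolean function on a million independent one-bit "lanes" at once. Widening the memory widens the computation — the property a serial ALU lacks.

```python
import random

WIDTH = 1_000_000          # one million independent one-bit "lanes"
mask = (1 << WIDTH) - 1

random.seed(0)
x = random.getrandbits(WIDTH)
y = random.getrandbits(WIDTH)

# One expression computes f(x, y) = x AND (NOT y) on every lane at once;
# conceptually the whole memory word is the processing unit.
result = x & (~y & mask)

# Spot-check a few lanes against the serial, bit-by-bit definition.
for i in (0, 17, WIDTH - 1):
    lane = (result >> i) & 1
    serial = ((x >> i) & 1) & (((y >> i) & 1) ^ 1)
    assert lane == serial
print("parallel and serial agree on sampled lanes")
```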

What is the difference ?

In the first case, at each step of the serial computation we can compare the current result with any other result; in the second case we lose this possibility — we cannot compare intermediate results. But we don't need that to implement an NDTM!
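This independence of branches is the whole point. A sketch of the pattern (a tiny hypothetical CNF instance, all names made up): every branch verifies its own guess in isolation, and the only communication is a final "does any branch accept?", which parallel hardware can compute without routing intermediate results anywhere.

```python
from itertools import product

# Hypothetical 3-variable CNF instance (DIMACS-style signed literals).
clauses = [[1, -2], [2, 3], [-1, -3]]
n = 3

def branch_accepts(assignment):
    # Each branch sees only its own guessed assignment: no shared state,
    # no comparison with any other branch's intermediate result.
    return all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses)

# The branches are independent, so this map could run on 2**n processors;
# the single final OR is the only point where results meet.
accepted = any(map(branch_accepts, product([False, True], repeat=n)))
print(accepted)  # True: some branch accepts, so the formula is satisfiable
```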

So this is the important feature of parallel computation: **we can approximate an NDTM very well**.

This is only the beginning: we need to completely change the hardware architecture. The main problem is the concept of the bus, through which every piece of information must transit to go from memory to CPU to device. We need a system with no difference between memory and processing unit…

I have great expectations for this field.