Parallelizing again

This is the latest release I compiled of the Cellular Automata 1D evolver on GPU, written in CUDA:

cev1.2

It is more than a year old and was optimized for a GPU with about 500 cores. The new NVIDIA GPUs have "only" about 2500 cores, a factor of 5, but for the same price it is possible to buy 4000~5000 cores with AMD GPUs like the 7xxx and, soon, the 8xxx series. OpenCL support is also advancing at AMD, NVIDIA, and Intel. I am not sure whether OpenCL can reach the same computational power, and I am sure there are problems in how the different vendors implement the language, so it is not easy to write OpenCL programs that work across different GPUs of the same brand, let alone across brands. Still, OpenCL would let me work with different hardware solutions, and perhaps I can reach a 10x factor using it (my main doubt is whether it can express the synchronization tricks I use in CUDA, tricks that gain me a 5x speedup!).

The first AMD card I bought is the Gigabyte Radeon HD 7970:

[Photo: the Gigabyte Radeon HD 7970]

With its 2048 overclockable cores it can reach about 6~7 times the computational throughput of my good GeForce GTX 480 (a very good card that worked night and day for years without hesitation).

OK, 6x faster is not enough for me, not enough to justify reimplementing everything, so my plan is to buy another card, a 7970 or an 8970 when available, and work with a minimum of 2048+2560 (or ~2300) cores, for an overall speedup of about 10x. This multi-GPU configuration gives me the opportunity to implement another level of parallelization.

The current implementation of the evolver parallelizes at the multicore level, where the GPU memory is shared by all the cores (there are different levels of shared memory); this lets each thread compute a value by reading the results of other threads (every cell is a function of the previous 3 cells). Managing the computational resources without relying on shared memory lets the system expand to many levels of parallelization.
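To make the dependence concrete, here is a minimal CUDA sketch (not the actual evolver code) of one generation of a binary 1D CA with a 3-cell neighborhood. The kernel name, block size, and the rule 110 table are illustrative assumptions; each block stages its cells plus a one-cell halo in shared memory so threads can read their neighbours' values.

```cuda
// Minimal sketch: one generation of a binary 1D CA (rule 110 as an example).
// Names and sizes are illustrative, not taken from the evolver's sources.
#include <cstdio>
#include <cuda_runtime.h>

#define BLOCK 256

__constant__ unsigned char d_rule[8];   // 8-entry rule table (here: rule 110)

__global__ void step_kernel(const unsigned char *in, unsigned char *out, int n)
{
    __shared__ unsigned char tile[BLOCK + 2];   // block's cells + 1-cell halo
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x + 1;

    if (gid < n) {
        tile[lid] = in[gid];
        if (threadIdx.x == 0)                               // left halo, wrapping
            tile[0] = in[(gid - 1 + n) % n];
        if (threadIdx.x == blockDim.x - 1 || gid == n - 1)  // right halo
            tile[lid + 1] = in[(gid + 1) % n];
    }
    __syncthreads();    // every thread can now read its neighbours' cells

    if (gid < n) {      // every cell is a function of the previous 3 cells
        int idx = (tile[lid - 1] << 2) | (tile[lid] << 1) | tile[lid + 1];
        out[gid] = d_rule[idx];
    }
}

int main()
{
    const int n = 1024, steps = 512;
    unsigned char rule110[8] = {0, 1, 1, 1, 0, 1, 1, 0};
    cudaMemcpyToSymbol(d_rule, rule110, 8);

    unsigned char *a, *b, one = 1;
    cudaMalloc(&a, n);
    cudaMalloc(&b, n);
    cudaMemset(a, 0, n);
    cudaMemcpy(a + n / 2, &one, 1, cudaMemcpyHostToDevice);  // single seed cell

    for (int t = 0; t < steps; ++t) {
        step_kernel<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(a, b, n);
        unsigned char *tmp = a; a = b; b = tmp;              // ping-pong buffers
    }

    unsigned char host[n];
    cudaMemcpy(host, a, n, cudaMemcpyDeviceToHost);
    int ones = 0;
    for (int i = 0; i < n; ++i) ones += host[i];
    printf("live cells after %d steps: %d\n", steps, ones);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```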

[Diagram: triangular decomposition of the computation into jobs (multica1)]

The image above shows different triangles, each one representing a computational job; the information shared between jobs is the perimeter of the triangle. To be computed, a red triangle needs the base produced by a yellow triangle, and from there the computation proceeds with rows of decreasing size, so no other information is required. The base of a red triangle is in turn computed by a yellow triangle, which needs the two sides computed by two red triangles; so we have a dependence where each red triangle needs one yellow triangle, which needs two red triangles.
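As a sketch of how a single job could run, here is host-side C++ (again illustrative, not the evolver's code) that computes a red upward triangle from its base row and collects the two slanted sides, which are exactly the perimeter information the neighbouring yellow jobs would consume. The rule function and all names are assumptions.

```cpp
// Sketch of a "red" (upward) triangle job: from a base of B cells it
// computes shrinking rows and records the two slanted sides for its
// neighbours. Illustrative only; uses rule 110 as the example rule.
#include <cstdio>
#include <vector>

using Row = std::vector<unsigned char>;

static unsigned char rule(unsigned char l, unsigned char c, unsigned char r)
{
    static const unsigned char table[8] = {0, 1, 1, 1, 0, 1, 1, 0};
    return table[(l << 2) | (c << 1) | r];
}

// Fills `left` and `right` with the boundary cells of the triangle above
// `base`; these sides are the information shared with other jobs.
void red_triangle(const Row &base, Row &left, Row &right)
{
    Row cur = base;
    left.clear();
    right.clear();
    while (cur.size() >= 3) {
        left.push_back(cur.front());    // slanted edges = shared perimeter
        right.push_back(cur.back());
        Row next(cur.size() - 2);       // each row loses one cell per side
        for (size_t i = 0; i + 2 < cur.size(); ++i)
            next[i] = rule(cur[i], cur[i + 1], cur[i + 2]);
        cur = next;
    }
    // the remaining 1 or 2 apex cells close the perimeter
    left.insert(left.end(), cur.begin(), cur.end());
    right.insert(right.end(), cur.rbegin(), cur.rend());
}

int main()
{
    Row base = {0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0};   // toy 12-cell base
    Row left, right;
    red_triangle(base, left, right);
    printf("left side: %zu cells, right side: %zu cells\n",
           left.size(), right.size());
    return 0;
}
```

A yellow (inverted) triangle would do the symmetric thing: start from two such sides arriving from its left and right neighbours and grow its rows until it produces the base of the next red triangle.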

The size of the triangles will depend on the power of the computational units and on the bandwidth of the communication channels. It is also possible to recursively split a triangle into sub-triangles, which can be useful when there are different levels of computational units (multi-GPU, multi-PC, computing grid).

Given a triangle with a base of size B, its perimeter is B*2, and this is the amount of communication in/out for the triangle. The number of cells computed inside the triangle is (B/2)^2.

So, if the hardware can compute C cells in the time it takes to communicate 1 cell, balancing computation against communication requires (B/2)^2 / (2*B) = B/8 = C, that is, a base of B = 8*C. At this size there is no idle time due to synchronization.
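As a quick sanity check of the balance (with a made-up value of C), this snippet evaluates both sides of the condition:

```cpp
// Numeric check of B = 8*C: with C = 512 the balanced base is B = 4096,
// the triangle computes (B/2)^2 cells against 2*B cells of communication,
// and their ratio comes back to C. Values are illustrative.
#include <cstdio>

int main()
{
    long long C = 512;                      // assumed compute/comm ratio
    long long B = 8 * C;                    // balanced triangle base
    long long cells = (B / 2) * (B / 2);    // work inside the triangle
    long long comm  = 2 * B;                // perimeter traffic in/out
    printf("B=%lld cells=%lld comm=%lld cells/comm=%lld (== C)\n",
           B, cells, comm, cells / comm);
    return 0;
}
```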
