Video Dev Geeks

Enrique Ruiz-Velasco's blog about video technologies and software development.

Tuesday, February 08, 2011

Using CUDA for Neural Network-based Image Enhancing Filter

Toward the end of last year I started looking into NVIDIA's CUDA technology. I downloaded the SDK here and quickly realized that the parallelism offered by GPUs is ideal for some of the work I've done before experimenting with neural networks (NN) to enhance images. I ported my previous filter implementation to CUDA and reduced the processing time considerably; I would say it's about 10 times faster. It turns out that CUDA is a great fit for NNs because a NN typically performs the same operation on every block of data, so I was able to take advantage of the parallelism (1024 threads or so) to process the data quickly. I'm currently running into some issues with accessing the same memory location from multiple threads; in a way, parallel programming forces you to stop thinking in terms of sequential loops and to figure out how to structure the data so the operations can be performed on all the blocks simultaneously. I expect to solve these issues very soon.
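The post doesn't include the filter's actual code or layer sizes, but a minimal sketch of the kind of per-block kernel described above might look like the following: each thread runs the same small two-layer network on its own 8x8 block of pixels, so thousands of blocks are processed at once. The layer dimensions, parameter names, and launch configuration here are assumptions for illustration, not the real filter.

#include <cuda_runtime.h>
#include <math.h>

#define BLOCK_PIXELS 64   // one 8x8 image block per thread (assumed size)
#define HIDDEN_UNITS 16   // assumed hidden-layer width

// Every thread applies the same forward pass to its own block of pixels,
// which is what makes the problem map so cleanly onto the GPU.
__global__ void nnFilterKernel(const float* blocks,  // numBlocks * BLOCK_PIXELS inputs
                               const float* w1,      // HIDDEN_UNITS * BLOCK_PIXELS weights
                               const float* b1,      // HIDDEN_UNITS biases
                               const float* w2,      // BLOCK_PIXELS * HIDDEN_UNITS weights
                               const float* b2,      // BLOCK_PIXELS biases
                               float* out,           // numBlocks * BLOCK_PIXELS outputs
                               int numBlocks)
{
    int b = blockIdx.x * blockDim.x + threadIdx.x;
    if (b >= numBlocks) return;

    const float* in = blocks + b * BLOCK_PIXELS;
    float hidden[HIDDEN_UNITS];

    // Input -> hidden layer with tanh activation.
    for (int h = 0; h < HIDDEN_UNITS; ++h) {
        float acc = b1[h];
        for (int i = 0; i < BLOCK_PIXELS; ++i)
            acc += w1[h * BLOCK_PIXELS + i] * in[i];
        hidden[h] = tanhf(acc);
    }

    // Hidden -> output layer: one enhanced pixel per output unit.
    for (int o = 0; o < BLOCK_PIXELS; ++o) {
        float acc = b2[o];
        for (int h = 0; h < HIDDEN_UNITS; ++h)
            acc += w2[o * HIDDEN_UNITS + h] * hidden[h];
        out[b * BLOCK_PIXELS + o] = acc;
    }
}

// Example launch: nnFilterKernel<<<(numBlocks + 255) / 256, 256>>>(d_blocks, d_w1, d_b1, d_w2, d_b2, d_out, numBlocks);

Because each thread writes only to its own block of the output, this version avoids the shared-memory-location problem entirely; the contention issues mentioned above show up once several threads need to accumulate into the same weights or gradients.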

This is a screenshot of the neural filter in action so far. On the left-hand side is the original, heavily compressed image; on the right-hand side is the NN-processed image. As you can see, the filter tries to remove some of the blockiness of the original image. I still need to fine-tune the momentum constant, I think; the bias is not reacting as quickly as I would like, which creates some low-frequency streaks across the image.
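For reference, the momentum constant being tuned enters the training update in the standard way: each bias keeps a running delta, and a larger momentum value lets it keep moving between iterations instead of reacting sluggishly. A small sketch of that update as a CUDA kernel is below; the names (learningRate, momentum) and layout are illustrative, not taken from the actual filter.

// Momentum update for the bias vector:
// delta(t) = momentum * delta(t-1) - learningRate * gradient
__global__ void updateBiases(float* bias, float* biasDelta, const float* biasGrad,
                             float learningRate, float momentum, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    biasDelta[i] = momentum * biasDelta[i] - learningRate * biasGrad[i];
    bias[i] += biasDelta[i];
}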

Overall it looks very promising. Once I add color information, the perceived quality of the image should improve.
